Rebuilding visual vocabulary via spatial-temporal context similarity for video retrieval
http://data.open.ac.uk/oro/40780
is an Article, Academic article

Outgoing links

Property Object
Date 2014-01
Is part of repository
Status Peer reviewed
URI
  • http://data.open.ac.uk/oro/document/255384
  • http://data.open.ac.uk/oro/document/255385
  • http://data.open.ac.uk/oro/document/255386
  • http://data.open.ac.uk/oro/document/255387
  • http://data.open.ac.uk/oro/document/255388
  • http://data.open.ac.uk/oro/document/255389
  • http://data.open.ac.uk/oro/document/256304
Volume 8325
Abstract The Bag-of-visual-Words (BovW) model is one of the most popular visual content representation methods for large-scale content-based video retrieval. The visual words are quantized according to a visual vocabulary, which is generated by a visual feature clustering process (e.g. K-means, GMM, etc.). In principle, two types of errors can occur in the quantization process, referred to as the UnderQuantize and OverQuantize problems. The former causes ambiguities and often leads to false visual content matches, while the latter generates synonyms and may lead to missing true matches. Unlike most state-of-the-art research, which concentrates on enhancing the BovW model by disambiguating the visual words, in this paper we aim to address the OverQuantize problem by incorporating the similarity of spatial-temporal contexts associated with pairwise visual words. Visual words with similar context and appearance are assumed to be synonyms. These synonyms in the initial visual vocabulary are then merged to rebuild a more compact and descriptive vocabulary. Our approach was evaluated on the TRECVID2002 and CC_WEB_VIDEO datasets for two typical Query-By-Example (QBE) video retrieval applications. Experimental results demonstrated substantial improvements in retrieval performance over the initial visual vocabulary generated by the BovW model. We also show that our approach can be combined with a state-of-the-art disambiguation method to further improve QBE video retrieval performance.
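The abstract describes a generic pipeline: cluster local descriptors into an initial visual vocabulary, then merge visual words that look alike and occur in similar contexts. The sketch below illustrates that idea only; the co-occurrence-based context vectors, the cosine-similarity criterion, and the thresholds are assumptions for illustration and are not the paper's spatial-temporal context measure.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity


def build_vocabulary(descriptors, k=1000, seed=0):
    """Quantize local descriptors (n x d) into k visual words (k-means centroids)."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptors)


def cooccurrence_context(word_ids_per_frame, k):
    """Simple stand-in context: counts of words co-occurring in the same frame."""
    ctx = np.zeros((k, k))
    for words in word_ids_per_frame:
        uniq = np.unique(words)
        for w in uniq:
            ctx[w, uniq] += 1
    np.fill_diagonal(ctx, 0)  # a word is not part of its own context
    return ctx


def merge_synonyms(centroids, ctx, appearance_thr=0.5, context_thr=0.9):
    """Greedily map each visual word to a representative id; close pairs share one id.

    appearance_thr and context_thr are illustrative values, not from the paper.
    """
    k = centroids.shape[0]
    ctx_sim = cosine_similarity(ctx)                                   # context similarity
    app_dist = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    mapping = np.arange(k)                                             # word id -> merged word id
    for i in range(k):
        for j in range(i + 1, k):
            if app_dist[i, j] < appearance_thr and ctx_sim[i, j] > context_thr:
                mapping[j] = mapping[i]                                # treat j as a synonym of i
    return mapping

After merging, video frames would be re-encoded with the remapped word ids, yielding the more compact vocabulary the abstract refers to.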
Authors authors
Type
Label Wang, Lei; Elyan, Eyad and Song, Dawei (2014). Rebuilding visual vocabulary via spatial-temporal context similarity for video retrieval. In: Multimedia Modelling: 20th Anniversary International Conference, MMM 2014, Dublin, Ireland, January 6-10, 2014, Proceedings, Part I, Lecture Notes in Computer Science, Springer International Publishing, pp. 74–85.
Title Rebuilding visual vocabulary via spatial-temporal context similarity for video retrieval
Dataset Open Research Online
Creator
Publisher Springer International Publishing
At The 20th Anniversary International Conference on MultiMedia Modeling