User variance and its impact on video retrieval benchmarking

Peter Wilkins, Raphael Troncy, Martin Halvey, Daragh Byrne, Alia Amin, P Punitha, Alan F. Smeaton, Robert Villa

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    In this paper, we describe one of the largest multi-site interactive video retrieval experiments conducted in a laboratory setting. Interactive video retrieval performance is difficult to compare across studies because variables exist across users, interfaces and the underlying retrieval engine. Conducted within the framework of TRECVID 2008, we completed a multi-site, multi-interface experiment. Three institutions participated, involving 36 users, 12 each from Dublin City University (DCU, Ireland), the University of Glasgow (GU, Scotland) and Centrum Wiskunde & Informatica (CWI, the Netherlands). Three user interfaces were developed, all of which used the same underlying search service. Using a Latin squares arrangement, each user completed 12 topics, leading to 6 TRECVID runs per site, 18 in total.
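    A minimal sketch of the kind of Latin squares rotation described in the abstract, assuming a simple cyclic assignment of topic order to users; the paper's actual arrangement (which also balances users across the three interfaces) may differ:

```python
# Illustrative only: a cyclic Latin-square rotation of topic order across users.
# The counts (12 users per site, 12 topics) follow the abstract; the rotation
# scheme itself is an assumption, not the authors' published design.

NUM_USERS_PER_SITE = 12
NUM_TOPICS = 12


def latin_square_schedule(num_users: int, num_topics: int) -> list[list[int]]:
    """Return one topic ordering per user; each ordering is a cyclic shift,
    so every topic appears exactly once in each session position."""
    return [
        [(user + offset) % num_topics for offset in range(num_topics)]
        for user in range(num_users)
    ]


if __name__ == "__main__":
    for user, topics in enumerate(latin_square_schedule(NUM_USERS_PER_SITE, NUM_TOPICS)):
        print(f"user {user:2d}: topic order {topics}")
```

    With 12 users and 12 topics, every topic occurs exactly once in each position of the session, which is what balances ordering effects across users.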
    Original language: English
    Title of host publication: Proceedings of the ACM International Conference on Image and Video Retrieval (CIVR) 2009
    Publisher: ACM
    Number of pages: 8
    ISBN (Print): 9781605584805
    DOI: 10.1145/1646396.1646400
    Publication status: Published - 2009

    Keywords

    • TRECVID
    • user variance
    • video retrieval


    Cite this

    Wilkins, P., Troncy, R., Halvey, M., Byrne, D., Amin, A., Punitha, P., Smeaton, A. F., & Villa, R. (2009). User variance and its impact on video retrieval benchmarking. In Proceedings of the ACM International Conference on Image and Video Retrieval (CIVR) 2009. ACM. https://doi.org/10.1145/1646396.1646400