
APES: Audiovisual person search in untrimmed video

  • Juan Leon Alcazar
  • Fabian Caba Heilbron
  • Long Mai
  • Federico Perazzi
  • Joon Young Lee
  • Pablo Arbelaez
  • Bernard Ghanem

Research output: Contribution to journal › Conference article › peer-review

5 Citations (Scopus)

Abstract

Humans are arguably among the most important subjects in video streams; many real-world applications, such as video summarization or video editing workflows, often require the automatic search and retrieval of a person of interest. Despite tremendous efforts in the person re-identification and retrieval domains, few works have developed audiovisual search strategies. In this paper, we present the Audiovisual Person Search dataset (APES), a new dataset composed of untrimmed videos whose audio (voices) and visual (faces) streams are densely annotated. APES contains over 1.9K identities labeled along 36 hours of video, making it the largest dataset available for untrimmed audiovisual person search. A key property of APES is that it includes dense temporal annotations that link faces to speech segments of the same identity. To showcase the potential of our new dataset, we propose an audiovisual baseline and benchmark for person retrieval. Our study shows that modeling audiovisual cues benefits the recognition of people's identities.
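The idea of combining face and voice cues for retrieval can be illustrated with a late-fusion sketch: score each gallery identity by a weighted sum of face-embedding and voice-embedding similarities to the query, then rank. This is only a minimal illustration, not the paper's actual baseline; the embedding dimensions, the `alpha` weight, and the identity names are all hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def audiovisual_score(query, candidate, alpha=0.5):
    # Late fusion: weighted sum of face and voice similarities.
    face_sim = cosine(query["face"], candidate["face"])
    voice_sim = cosine(query["voice"], candidate["voice"])
    return alpha * face_sim + (1 - alpha) * voice_sim

# Hypothetical query and gallery with toy 3-D embeddings.
query = {"face": [1.0, 0.0, 0.0], "voice": [0.0, 1.0, 0.0]}
gallery = {
    "id_a": {"face": [0.9, 0.1, 0.0], "voice": [0.1, 0.9, 0.0]},
    "id_b": {"face": [0.0, 0.0, 1.0], "voice": [1.0, 0.0, 0.0]},
}
ranked = sorted(gallery,
                key=lambda k: audiovisual_score(query, gallery[k]),
                reverse=True)
print(ranked[0])  # id_a: agrees with the query in both modalities
```

Fusing both modalities is what lets the audiovisual baseline outperform single-modality retrieval in the paper's study: an identity that matches on face and voice ranks above one that matches on neither or only one.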

Original language: English
Pages (from-to): 1720-1729
Number of pages: 10
Journal: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOI
Status: Published - Jun. 2021
Event: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021 - Virtual, Online, United States
Duration: 19 Jun. 2021 - 25 Jun. 2021

Bibliographical note

Publisher Copyright:
© 2021 IEEE.

