
Citation Suggestion

Please use the following Persistent Identifier (PID) to cite this document:
https://nbn-resolving.org/urn:nbn:de:0168-ssoar-66084-2


Speaker trait characterization in web videos: Uniting speech, language, and facial features

[conference paper]

Weninger, Felix
Wagner, Claudia
Wöllmer, Martin
Schuller, Björn
Morency, Louis-Philippe

Abstract

We present a multi-modal approach to speaker characterization using acoustic, visual and linguistic features. Full realism is provided by evaluation on a database of real-life web videos and automatic feature extraction including face and eye detection, and automatic speech recognition. Different segmentations are evaluated for the audio and video streams, and the statistical relevance of Linguistic Inquiry and Word Count (LIWC) features is confirmed. In the result, late multimodal fusion delivers 73, 92 and 73% average recall in binary age, gender and race classification on unseen test subjects, outperforming the best single modalities for age and race.
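
For orientation only (this sketch is not taken from the paper or the SSOAR record): a minimal Python example of the late-fusion scheme named in the abstract, assuming one classifier per modality whose class posteriors are averaged before the final decision, scored by unweighted average recall. All feature dimensions, data, and labels below are placeholders.

    # Illustrative late-fusion sketch: one classifier per modality,
    # posteriors averaged, evaluated by unweighted average recall.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)
    n_train, n_test = 200, 50

    # Hypothetical per-modality feature matrices for the same speakers.
    X_train = {
        "acoustic":   rng.normal(size=(n_train, 30)),   # e.g. prosodic/spectral functionals
        "visual":     rng.normal(size=(n_train, 20)),   # e.g. face/eye-region descriptors
        "linguistic": rng.normal(size=(n_train, 10)),   # e.g. LIWC category frequencies
    }
    X_test = {m: rng.normal(size=(n_test, X_train[m].shape[1])) for m in X_train}
    y_train = rng.integers(0, 2, size=n_train)          # binary trait label (placeholder)
    y_test = rng.integers(0, 2, size=n_test)

    # Train one classifier per modality and collect class posteriors on the test set.
    posteriors = []
    for modality in X_train:
        clf = SVC(probability=True).fit(X_train[modality], y_train)
        posteriors.append(clf.predict_proba(X_test[modality]))

    # Late fusion: average the per-modality posteriors, then take the argmax.
    fused = np.mean(posteriors, axis=0)
    y_pred = fused.argmax(axis=1)

    # Unweighted average recall (mean of per-class recalls), as reported in the abstract.
    uar = recall_score(y_test, y_pred, average="macro")
    print(f"Unweighted average recall: {uar:.2f}")
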

Keywords
video; video clip; recording; computational linguistics; Internet; evaluation; social media; experiment; audiovisual media

Classification
Natural Science and Engineering, Applied Sciences

Free Keywords
speaker classification; computational paralinguistics; multi-modal fusion; Linguistic Inquiry and Word Count; LIWC

Collection Title
Proceedings of the 38th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013)

Conference
38th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013). Vancouver, 2013

Document language
English

Publication Year
2013

Publisher
IEEE

Page/Pages
p. 3647-3651

DOI
https://doi.org/10.1109/ICASSP.2013.6638338

ISSN
2379-190X

ISBN
978-1-4799-0356-6

Status
Published Version; peer reviewed

Licence
Deposit Licence - No Redistribution, No Modifications

