Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/10830
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Cifani, Simone (en_UK)
dc.contributor.author: Abel, Andrew (en_UK)
dc.contributor.author: Hussain, Amir (en_UK)
dc.contributor.author: Squartini, Stefano (en_UK)
dc.contributor.author: Piazza, Francesco (en_UK)
dc.contributor.editor: Esposito, A (en_UK)
dc.contributor.editor: Vích, R (en_UK)
dc.date.accessioned: 2017-08-12T01:40:05Z
dc.date.available: 2017-08-12T01:40:05Z (en_UK)
dc.date.issued: 2009 (en_UK)
dc.identifier.uri: http://hdl.handle.net/1893/10830
dc.description.abstract: As evidence of a link between the various human communication production domains has become more prominent in the last decade, the field of multimodal speech processing has undergone significant expansion. Many different specialised processing methods have been developed to attempt to analyse and utilise the complex relationship between multimodal data streams. This work uses information extracted from an audiovisual corpus to investigate and assess the correlation between audio and visual features in speech. A number of different feature extraction techniques are assessed, with the intention of identifying the visual technique that maximises the audiovisual correlation. Additionally, this paper aims to demonstrate that a noisy and reverberant audio environment reduces the degree of audiovisual correlation, and that the application of a beamformer remedies this. Experimental results, obtained in a synthetic scenario, confirm the positive impact of beamforming, not only in improving the audiovisual correlation but also within a complete audiovisual speech enhancement scheme. This work thus highlights an important consideration for the development of promising future bimodal speech enhancement systems. (An illustrative correlation sketch follows the metadata record below.) (en_UK)
dc.language.iso: en (en_UK)
dc.publisher: Springer-Verlag (en_UK)
dc.relation: Cifani S, Abel A, Hussain A, Squartini S & Piazza F (2009) An investigation into audiovisual speech correlation in reverberant noisy environments. In: Esposito A & Vích R (eds.) Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions: COST Action 2102 International Conference Prague, Czech Republic, October 2008. Lecture Notes in Computer Science, 5641. Berlin, Germany: Springer-Verlag, pp. 331-343. http://www.springer.com/computer/image+processing/book/978-3-642-03319-3; https://doi.org/10.1007/978-3-642-03320-9_31 (en_UK)
dc.relation.ispartofseries: Lecture Notes in Computer Science, 5641 (en_UK)
dc.rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. (en_UK)
dc.rights.uri: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved (en_UK)
dc.title: An investigation into audiovisual speech correlation in reverberant noisy environments (en_UK)
dc.type: Part of book or chapter of book (en_UK)
dc.rights.embargodate: 3000-12-01 (en_UK)
dc.rights.embargoreason: [Abel_2009_An_Investigation_into_Audiovisual_Speech_Correlation.pdf] The publisher does not allow this work to be made publicly available in this Repository; therefore, there is an embargo on the full text of the work. (en_UK)
dc.identifier.doi: 10.1007/978-3-642-03320-9_31 (en_UK)
dc.citation.issn: 0302-9743 (en_UK)
dc.citation.spage: 331 (en_UK)
dc.citation.epage: 343 (en_UK)
dc.citation.publicationstatus: Published (en_UK)
dc.type.status: VoR - Version of Record (en_UK)
dc.identifier.url: http://www.springer.com/computer/image+processing/book/978-3-642-03319-3 (en_UK)
dc.author.email: aka@cs.stir.ac.uk (en_UK)
dc.citation.btitle: Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions: COST Action 2102 International Conference Prague, Czech Republic, October 2008 (en_UK)
dc.citation.isbn: 978-3642033193 (en_UK)
dc.publisher.address: Berlin, Germany (en_UK)
dc.contributor.affiliation: Marche Polytechnic University (en_UK)
dc.contributor.affiliation: Computing Science (en_UK)
dc.contributor.affiliation: Computing Science (en_UK)
dc.contributor.affiliation: Marche Polytechnic University (en_UK)
dc.contributor.affiliation: Marche Polytechnic University (en_UK)
dc.identifier.wtid: 735625 (en_UK)
dc.contributor.orcid: 0000-0002-8080-082X (en_UK)
dcterms.dateAccepted: 2009-12-31 (en_UK)
dc.date.filedepositdate: 2013-02-06 (en_UK)
rioxxterms.type: Book chapter (en_UK)
rioxxterms.version: VoR (en_UK)
local.rioxx.author: Cifani, Simone| (en_UK)
local.rioxx.author: Abel, Andrew| (en_UK)
local.rioxx.author: Hussain, Amir|0000-0002-8080-082X (en_UK)
local.rioxx.author: Squartini, Stefano| (en_UK)
local.rioxx.author: Piazza, Francesco| (en_UK)
local.rioxx.project: Internal Project|University of Stirling|https://isni.org/isni/0000000122484331 (en_UK)
local.rioxx.contributor: Esposito, A| (en_UK)
local.rioxx.contributor: Vích, R| (en_UK)
local.rioxx.freetoreaddate: 3000-12-01 (en_UK)
local.rioxx.licence: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved|| (en_UK)
local.rioxx.filename: Abel_2009_An_Investigation_into_Audiovisual_Speech_Correlation.pdf (en_UK)
local.rioxx.filecount: 1 (en_UK)
local.rioxx.source: 978-3642033193 (en_UK)
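Illustrative sketch (editor's addition): the abstract above describes measuring the correlation between audio and visual speech features and using a beamformer to counter a noisy, reverberant environment, but this record does not specify the paper's actual feature sets, correlation measure, or beamformer design. The following minimal Python sketch therefore assumes Pearson correlation between one-dimensional audio and lip feature tracks and a crude zero-delay delay-and-sum beamformer over independent noisy channels; all signals and parameters are synthetic stand-ins, not the authors' pipeline.

    # A hedged sketch of audiovisual correlation under noise and beamforming.
    # Assumptions (not from the paper): 1-D feature tracks, Pearson correlation,
    # zero-delay delay-and-sum beamforming over M microphone channels.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 2000  # number of feature frames

    # Shared articulation signal driving both modalities (a synthetic
    # stand-in for the speaker's mouth movement).
    articulation = np.cumsum(rng.normal(size=T))
    articulation /= np.abs(articulation).max()

    audio_clean = articulation + 0.1 * rng.normal(size=T)  # audio feature track
    visual = articulation + 0.1 * rng.normal(size=T)       # lip feature track

    def pearson(x, y):
        """Pearson correlation coefficient between two feature tracks."""
        x = x - x.mean()
        y = y - y.mean()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    # Correlation in clean conditions.
    print("clean     :", round(pearson(audio_clean, visual), 3))

    # Strong additive noise (standing in for the noisy, reverberant
    # environment) weakens the audiovisual link.
    noisy = audio_clean + 1.0 * rng.normal(size=T)
    print("noisy     :", round(pearson(noisy, visual), 3))

    # Averaging M channels with independent noise (a zero-delay
    # delay-and-sum beamformer) reduces the noise standard deviation by
    # about 1/sqrt(M), partially restoring the correlation.
    M = 8
    channels = [audio_clean + 1.0 * rng.normal(size=T) for _ in range(M)]
    beamformed = np.mean(channels, axis=0)
    print("beamformed:", round(pearson(beamformed, visual), 3))

Run as-is, the beamformed correlation lands between the clean and noisy values, mirroring the abstract's claim that beamforming partially restores the audiovisual correlation degraded by a noisy environment.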
Appears in Collections: Computing Science and Mathematics Book Chapters and Sections

Files in This Item:
File: Abel_2009_An_Investigation_into_Audiovisual_Speech_Correlation.pdf
Description: Fulltext - Published Version
Size: 455.12 kB
Format: Adobe PDF
Access: Under Embargo until 3000-12-01 (Request a copy)

This item is protected by original copyright