Visual Speech Processing - Winter Term 2017/2018
Administrative Info
Lecturer: Helen Bear (Guest Lecturer)
Type: Lecture IN3200
Programs: Informatics (Bachelor, Master)
Biomedical Computing (Master)
Computational Science and Engineering (Master)
Wirtschaftsinformatik (Bachelor, Master)
SWS: 4
ECTS: 5 Credits
Course Language: English
Contact: h.bear@surrey.ac.uk (by appointment; please send an email beforehand)
Time, Location & Requirements
Course information and certificate requirements:
- The classes as well as the exercises will be held in English.
- Software requirements: Unity, FFmpeg, HTK, Matlab
Announcements
- 15/08: Registration is now possible on TUMOnline
- 01/09: Lectures will be held on Mondays and Thursdays from 1 pm to 2:30 pm
Overview
It is a common misconception that speech comprehension is only about sounds. In reality, we also use visual cues to understand what someone is saying. Articulated speech is therefore bimodal: there are two channels of information, acoustic and visual.
Research in recognising acoustic speech is maturing (think of Siri and Cortana), but research into visual speech understanding is in its infancy. Sometimes called machine lip reading, work on understanding the visual speech channel (which you can also think of as a sequence of lip gestures) is showing just how difficult this is, and how different visual speech is from acoustic speech.
In this module we will cover the latest work in this field: understanding the challenges it poses, reviewing the latest results in recognition and synthesis (for animation), and seeing how what we already know is being exploited in new domains such as speech therapy and historical research.
This module requires proactive participation and independent study in addition to the contact time. Prior knowledge of signal processing is helpful but not essential; good mathematical skills are expected. The module is not about processing medical images such as MRIs or X-rays; rather, we use image processing of videos to understand visual speech.
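To give a flavour of the kind of video preprocessing involved (and of why FFmpeg appears in the software requirements above), here is a minimal sketch, with hypothetical filenames, that extracts still frames from a video. Locating and tracking the lips in such frames is a typical first step in a lip-reading pipeline; this is an illustration, not course material.

    import subprocess
    from pathlib import Path

    def extract_frames(video, out_dir, fps=25):
        """Extract still frames from a video using FFmpeg (assumed installed)."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg",
             "-i", video,                   # input video file
             "-vf", f"fps={fps}",           # resample to a fixed frame rate
             f"{out_dir}/frame_%05d.png"],  # numbered output images
            check=True)

    if __name__ == "__main__":
        # Hypothetical filenames, for illustration only.
        extract_frames("speaker.mp4", "frames")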
Some lip reading data sets are legally restricted to protect the identity and confidentiality of the data subjects; there is therefore a strict policy of no photographs or recordings during lectures. Shareable lecture slides and other teaching materials will be provided.
The module is structured as 4-6 hours of contact time per week, delivered as a mixture of lectures, seminars, and practical tutorials. In addition, the Module Organiser will hold two drop-in hours every week, which students can attend without an appointment. Appointments at other times can be arranged by email in advance.
Learning Outcomes
By the end of this module, students will gain:
- knowledge of the conventional lip-reading system and its parameters,
- an understanding of conventional and cutting-edge lip features, i.e. representations of small non-skeletal shapes,
- knowledge of the complex mapping from audio to visual speech,
- an understanding of how linguistics and speaker individuality affect the visual channel,
- practical skills in recognising and synthesising realistic speech,
- methodological skills in unit and label selection for modelling visual speech gestures,
- practical skills in audio speech recognition,
- knowledge of expressive speech component variations, and
- an understanding of how to recognise languages from the visual speech channel alone.
Preliminary Lecture & Exercise Schedule