This paper
presents an overview of the M3 (multi-biometric, multi-device and multilingual)
Corpus. M3 aims to support research in multi-biometric technologies for
pervasive computing using mobile devices. The corpus includes three biometrics:
facial images, speech and fingerprints; three devices: a desktop PC with
plug-in microphone and webcam, a Pocket PC and a 3G phone; as well as three
languages of geographical relevance to Hong Kong: Cantonese, Putonghua and English. The
multimodal user interface can readily extend from desktop computers to mobile
handhelds and smartphones, which have small form factors. Multimodal biometric
authentication can also leverage the mutual complementarity among
modalities, which is particularly useful in dynamic environmental conditions
encountered in pervasive computing. For example, we should emphasize facial
images over speech when verification is performed in noisy acoustic
environments. M3 is designed to include variable environmental factors indoors
and outdoors, simultaneous recordings across multiple devices to support
comparative and contrastive investigations, bilingual text prompts to elicit
both application-oriented and cognitive speech data, as well as multi-session
data from a fairly large set of subjects.
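
To make the complementarity point concrete, the following is a minimal sketch of environment-aware score fusion, assuming each modality produces a match score normalized to [0, 1] and that an estimate of the acoustic signal-to-noise ratio is available. The function name, SNR range, and linear weighting rule are illustrative assumptions rather than a method specified for M3.

```python
# A minimal sketch (not from the paper) of environment-aware score fusion:
# the weight on the speech score shrinks as the estimated acoustic SNR drops,
# so the facial score dominates in noisy conditions. The SNR range and the
# linear weighting rule below are illustrative assumptions.

def fuse_scores(face_score: float, speech_score: float, snr_db: float) -> float:
    """Combine per-modality match scores (assumed normalized to [0, 1])."""
    # Map SNR in [0 dB, 30 dB] to a speech weight in [0.1, 0.5] (assumed range).
    snr_clamped = max(0.0, min(30.0, snr_db))
    w_speech = 0.1 + 0.4 * (snr_clamped / 30.0)
    w_face = 1.0 - w_speech
    return w_face * face_score + w_speech * speech_score

# Example: in a noisy outdoor setting (5 dB SNR) the facial score dominates.
print(fuse_scores(face_score=0.82, speech_score=0.40, snr_db=5.0))
```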