Aalto’s team participates in the ACM Multimedia 2022 Computational Paralinguistics ChallengE

The ACM Multimedia 2022 Computational Paralinguistics ChallengE (ComParE) is an open Grand Challenge dealing with states and traits of speakers as manifested in their speech signal’s properties and beyond. At the start of the competition, the organizers provide the data, and the Sub-Challenges are generally open for participation.

This year our team, consisting of PhD students (Yaroslav Getman and Dejan Porjazovski), Research Fellows (Tamás Grósz and Sudarsana Reddy Kadiri), and Professor Mikko Kurimo, embarked on tackling two Sub-Challenges: Vocalisations and Stuttering.

In the Stuttering Sub-Challenge, participants were tasked with developing a system that can recognize different kinds of stuttering (e.g. word/phrase repetition, prolongation, sound repetition, and others). Stuttering is a complex speech disorder with a crude prevalence of about 1 % of the population. Monitoring stuttering would provide objective feedback to persons who stutter (PWS) and to speech therapists, thus facilitating tailored speech therapy; the automatic detection of different stuttering phenomena is a necessary prerequisite for this. As training data, we could use the Kassel State of Fluency corpus, which contains approximately 5,600 annotated samples.

In the Vocalisations Sub-Challenge, non-verbal vocal expressions (such as laughter, cries, moans, and screams) from the Variably Intense Vocalizations of Affect and Emotion Corpus were used for classifying the expression of six different emotions. Such human non-verbals are still understudied but are ubiquitous in human communication. This task was extremely challenging because the training data contained only female voices, while the developed systems were evaluated on male vocalisations.

Our team developed solutions for both tasks using state-of-the-art models like wav2vec 2.0, data augmentation and other simple tricks based on the distributed training data. For technical details, see our paper:
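As an illustration of the kind of simple waveform-level data augmentation such systems often rely on, the sketch below implements additive noise at a target signal-to-noise ratio and speed perturbation via resampling. The function names and parameter values are illustrative only, not taken from our actual pipeline:

```python
import numpy as np

def add_noise(wave: np.ndarray, snr_db: float = 20.0, rng=None) -> np.ndarray:
    """Add white Gaussian noise at a given signal-to-noise ratio (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=wave.shape)
    return wave + noise

def speed_perturb(wave: np.ndarray, factor: float = 1.1) -> np.ndarray:
    """Change playback speed by resampling with linear interpolation."""
    old_idx = np.arange(len(wave))
    new_len = int(round(len(wave) / factor))
    new_idx = np.linspace(0, len(wave) - 1, new_len)
    return np.interp(new_idx, old_idx, wave)

# Example: augment a one-second 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)
noisy = add_noise(wave, snr_db=20.0)
faster = speed_perturb(wave, factor=1.1)  # ~10% shorter signal
```

Augmented copies like these are typically mixed into the training set so the fine-tuned model sees more acoustic variability than the original corpus provides.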

Tamás Grósz, Dejan Porjazovski, Yaroslav Getman, Sudarsana Kadiri, and Mikko Kurimo. 2022. Wav2vec2-based Paralinguistic Systems to Recognise Vocalised Emotions and Stuttering. In Proceedings of the 30th ACM International Conference on Multimedia (MM ’22). Association for Computing Machinery, New York, NY, USA, 7026–7029. https://doi.org/10.1145/3503161.3551572

In total, 23 teams from all around the world registered for the competition, of which 8 submitted solutions for the Stuttering Sub-Challenge and 11 for the Vocalisations Sub-Challenge.

Aalto’s team won both competitions, earning two places in the ComParE hall of fame.


Teflon team presents at Interspeech

The 23rd INTERSPEECH Conference took place from September 18 to 22, 2022, at Songdo ConvensiA, in Incheon, Korea, under the theme Human and Humanizing Speech Technology. INTERSPEECH is the world’s largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theory to advanced applications.

Truly a city of the future, Songdo sits adjacent to Seoul, regarded as one of the technology capitals of the world. The city’s underground railway already offers high-speed WiFi, electronic panels at the exits display waiting times for connecting buses and trains, and companies like Samsung Electronics are working on linking household devices to mobile phones. On the technological front, Songdo is a brand-new city that offers the chance to truly integrate innovation into daily life.

This year, the Teflon team submitted a paper titled “wav2vec2-based Speech Rating System for Children with Speech Sound Disorder” to Interspeech. The article described our initial systems developed using Sofia Strömbergsson’s corpus of children with speech sound disorder. Speech therapy, which can aid these children in speech acquisition, relies heavily on speech practice trials and accurate feedback about the children’s pronunciation. Our solutions could be the basis for software tools that would enable home therapy and lessen the burden on speech-language pathologists. Our submission was accepted with very positive reviews and selected for a poster presentation.

We (Tamás & Mikko) presented our poster on Wednesday, September 21, 13:30–15:30 (KST). We were lucky enough to be placed right in front of the main entrance, which led to many people stopping at our stand to look at the poster.

We had several very intriguing conversations and gained some valuable ideas and suggestions from our colleagues, which we will explore in the future. After a fruitful poster session, we let off some steam at the gala banquet, where we had the chance to sample Korean cuisine and listen to some authentic K-POP music.



Getman, Y., Al-Ghezi, R., Voskoboinik, K., Grósz, T., Kurimo, M., Salvi, G., Svendsen, T., Strömbergsson, S. (2022) wav2vec2-based Speech Rating System for Children with Speech Sound Disorder. Proc. Interspeech 2022, 3618-3622, doi: 10.21437/Interspeech.2022-10103

The first face-to-face Teflon meeting in Helsinki, September 2022

On 5–6 September, 15 researchers from Aalto University, Tampere University, Karolinska Institutet, the University of Oslo, and NTNU (Trondheim) gathered on the Aalto University campus for the first face-to-face meeting of the Teflon project. The project has already been running for almost 1.5 years, but due to the pandemic, our kick-off and all other meetings had been held virtually. Four of us still had to participate remotely due to sudden Covid-19 cases in NTNU’s team, but for the rest, it was a truly delightful experience to meet and have in-depth discussions about the project, science, technology, and everything else.

We had two full days of discussion sessions covering the data, evaluations, automatic speech recognition, game design, automatic and human pronunciation assessment, experiment design, publications, dissemination, and project management. Because we were still in the early stages of building the children’s pronunciation game, collecting and annotating the data, and training the automatic assessment, the focus was clearly on planning the next steps of the project. On Monday evening, we continued the discussions over dinner in downtown Helsinki, enjoying good company and delicious food at the restaurant Emo.

The next steps in the project include finishing the game code, developing multitask systems and faster speech processing servers, repeating the previously run tests on the new Finnish, Swedish, and Norwegian data, finishing the human assessments for these data, fixing word lists and other specifications of the game for each language, and recruiting speakers for the remaining training data.