Mazzocconi, C., El Haddad, K., O'Brien, B., Bodur, K., & Fourtassi, A. (2025). Laughter Mimicry in Parent-Child and Parent-Adult Interaction. Proceedings of the Multimodal Communication Symposium.
Mazzocconi, C., O'Brien, B., El Haddad, K., Goldwater, I., Anggoro, F., Hayes, B., & Ong, D. (2025). Differences between Mimicking and Non-Mimicking Laughter in Child-Caregiver Conversation: A Distributional and Acoustic Analysis. Proceedings of the Annual Meeting of the Cognitive Science Society.
Bohy, H., Tran, M., El Haddad, K., Dutoit, T., & Soleymani, M. (2024). Social-MAE: A Transformer-Based Multimodal Autoencoder for Face and Voice. In 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024 (pp. 1-5). New York City, United States: Institute of Electrical and Electronics Engineers Inc. doi:10.1109/FG59268.2024.10581940
Finet, A., & El Haddad, K. (10 October 2023). Trading en intraday : une analyse comportementale [Intraday trading: a behavioral analysis] [Paper presentation]. Midi de la Recherche.
Deffrennes, A., Vincent, L., Pivette, M., El Haddad, K., Bailey, J. D., Perusquia-Hernandez, M., Alarcão, S. M., & Dutoit, T. (2023). The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications. In ICMI 2023 Companion - Companion Publication of the 25th International Conference on Multimodal Interaction. Association for Computing Machinery. doi:10.1145/3610661.3617155
Tits, N., El Haddad, K., & Dutoit, T. (25 November 2021). Analysis and Assessment of Controllability of an Expressive Deep Learning-Based TTS System. Informatics, 8 (4). doi:10.3390/informatics8040084
Tits, N., El Haddad, K., & Dutoit, T. (01 May 2021). ICE-Talk 2: Interface for Controllable Expressive TTS with perceptual assessment tool. Software Impacts, 8 (100055). doi:10.1016/j.simpa.2021.100055
Tits, N., El Haddad, K., & Dutoit, T. (2020). The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach. In Human 4.0 - From Biology to Cybernetic. IntechOpen. doi:10.5772/intechopen.89849
Tits, N., El Haddad, K., & Dutoit, T. (2020). Laughter Synthesis: Combining Seq2seq modeling with Transfer Learning [Paper presentation]. Conference of the International Speech Communication Association, Shanghai, China.
Tits, N., El Haddad, K., & Dutoit, T. (2020). ICE-Talk: an Interface for a Controllable Expressive Talking Machine [Paper presentation]. Conference of the International Speech Communication Association, Shanghai, China.
El Haddad, K., & Dutoit, T. (2020). Cross-Corpora Study of Smiles and Laughter Mimicry in Dyadic Interactions [Paper presentation]. Interdisciplinary Workshop on laughter and other non-verbal vocalisations, Bielefeld, Germany. doi:10.4119/lw2020-926
Tits, N., El Haddad, K., & Dutoit, T. (2020). Neural Speech Synthesis with Style Intensity Interpolation: A Perceptual Analysis [Paper presentation]. IEEE/ACM International Conference on Human-Robot Interaction, Cambridge, United Kingdom. doi:10.1145/3371382.3378297
El Haddad, K., Tits, N., Velner, E., & Bohy, H. (2020). Cozmo4Resto: A Practical AI Application for Human-Robot Interaction [Paper presentation]. eNTERFACE Summer Workshop on Multimodal Interfaces, Ankara, Turkey.
El Haddad, K., Zajega, F., & Dutoit, T. (2019). An Open-Source Avatar for Real-Time Human-Agent Interaction Applications [Paper presentation]. Affective Computing and Intelligent Interaction.
El Haddad, K., Nallan Chakravarthula, S., & Kennedy, J. (2019). Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles [Paper presentation]. International Conference on Multimodal Interaction, Suzhou, China. |
Tits, N., Wang, F., El Haddad, K., Pagel, V., & Dutoit, T. (2019). Visualization and Interpretation of Latent Spaces for Controlling Expressive Speech Synthesis through Audio Analysis [Paper presentation]. Conference of the International Speech Communication Association, Graz, Austria. doi:10.21437/Interspeech.2019-1426
Tits, N., El Haddad, K., & Dutoit, T. (2019). Exploring Transfer Learning for Low Resource Emotional TTS. In Intelligent Systems and Applications (pp. 52-60). Springer. doi:10.1007/978-3-030-29516-5_5
Tits, N., El Haddad, K., & Dutoit, T. (2019). Emotional Speech Datasets for English Speech Synthesis Purpose: A Review. In Intelligent Systems and Applications (pp. 61-66). Springer.
El Haddad, K., Rizk, Y., Heron, L., Hajj, N., Zhao, Y., Kim, J., Ngo Trong, T., Lee, M., Doumit, M., Lin, P., Kim, Y., & Cakmak, H. (2018). End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions. Journal of Science and Technology of the Arts.
Adigwe, A., Tits, N., El Haddad, K., Ostrowski, S., & Dutoit, T. (15 October 2018). The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems [Paper presentation]. International Conference on Statistical Language and Speech Processing, Mons, Belgium.
El Haddad, K., Tits, N., & Dutoit, T. (2018). Annotating Nonverbal Conversation Expressions in Interaction Datasets [Paper presentation]. Interdisciplinary Workshop on laughter and other non-verbal vocalisations in speech, Paris, France.
El Haddad, K., Cakmak, H., & Dutoit, T. (2018). On Laughter Intensity Level: Analysis and Estimation [Paper presentation]. Interdisciplinary Workshop on laughter and other non-verbal vocalisations in speech, Paris, France.
Tits, N., El Haddad, K., & Dutoit, T. (2018). ASR-based Features for Emotion Recognition: A Transfer Learning Approach [Paper presentation]. Grand Challenge and Workshop on Human Multimodal Language, Melbourne, Australia.
Devillers, L., Rosset, S., Dubuisson Duplessis, G., Bechade, L., Yemez, Y., Turker, B. B., Sezgin, M., El Haddad, K., Dupont, S., Deléglise, P., Estève, Y., Lailler, C., Gilmartin, E., & Campbell, N. (2018). Multifaceted Engagement in Social Interaction with a Machine: the JOKER Project [Paper presentation]. Workshop on Large-scale Emotion Recognition and Analysis, Xi'an, China.
El Haddad, K., Heron, L., Kim, J., Lee, M., Dupont, S., Dutoit, T., & Truong, K. (2018). A Dyadic Conversation Dataset On Moral Emotions [Paper presentation]. Workshop on Large-scale Emotion Recognition and Analysis, Xi'an, China. |
Cakmak, H., El Haddad, K., Riche, N., Leroy, J., Marighetto, P., Turker, B. B., Khaki, H., Pulisci, R., Gilmartin, E., Haider, F., Cengiz, K., Sulir, M., Torre, I., Marzban, S., Yazici, R., Bagci, F. B., Gazi Kili, V., Sezer, H., & Yenge, S. B. (2018). EASA : Environment Aware Social Agent [Paper presentation]. Proceedings of the 10th International Summer Workshop on Multimodal Interfaces - eNTERFACE'15, Mons, Belgium.
Bechade, L., El Haddad, K., Bourquin, J., Dupont, S., & Devillers, L. (2017). A Corpus for Experimental Study of Affect Bursts in Human-robot Interaction [Paper presentation]. 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, Glasgow, United Kingdom. |
Oertel, C., Jonell, P., El Haddad, K., Szekely, E., & Gustafson, J. (2017). Using crowd-sourcing for the design of listening agents: challenges and opportunities [Paper presentation]. 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, Glasgow, United Kingdom. |
El Haddad, K., Torre, I., Gilmartin, E., Cakmak, H., Dupont, S., Dutoit, T., & Campbell, N. (2017). Introducing AmuS: The Amused Speech Database [Paper presentation]. International Conference on Statistical Language and Speech Processing, Le Mans, France. |
El Haddad, K. (2017). Nonverbal conversation expressions processing for human-agent interactions [Paper presentation]. Affective Computing and Intelligent Interaction, San Antonio, Texas, United States.
El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2017). Amused speech components analysis and classification: Towards an amusement arousal level assessment system. Computers and Electrical Engineering.
El Haddad, K., Cakmak, H., Doumit, M., Pironkov, G., & Ayvaz, U. (2017). Social Communicative Events in Human Computer Interactions [Paper presentation]. Proceedings of the 11th International Summer Workshop on Multimodal Interfaces - eNTERFACE'16, Twente, Netherlands.
El Haddad, K., Cakmak, H., Gilmartin, E., Dupont, S., & Dutoit, T. (2016). Towards a Listening Agent: A System Generating Audiovisual Laughs and Smiles to Show Interest [Paper presentation]. International Conference on Multimodal Interfaces, Tokyo, Japan.
El Haddad, K., Cakmak, H., Sulir, M., Dupont, S., & Dutoit, T. (2016). Audio Affect Burst Synthesis: A Multilevel Synthesis System for Emotional Expressions [Paper presentation]. European Signal Processing Conference, Budapest, Hungary. |
Cakmak, H., El Haddad, K., & Pulisci, R. (2016). A real time OSC controlled agent for human machine interactions [Paper presentation]. Workshop on Artificial Companion WACAI, Brest, France. |
El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2016). Laughter and Smile Processing for Human-Computer Interactions [Paper presentation]. Workshop 'Just talking - casual talk among humans and machines' of LREC 2016, Portorož, Slovenia. |
El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2016). AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis [Paper presentation]. Tenth International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia. |
El Haddad, K., Dupont, S., & Dutoit, T. (2016). Affect bursts generation - v1 - JOKER Deliverable 5.3. https://orbi.umons.ac.be/handle/20.500.12907/41861 |
El Haddad, K., Dupont, S., & Dutoit, T. (2016). Speech Synthesis - v1 - JOKER Deliverable 5.2. https://orbi.umons.ac.be/handle/20.500.12907/41860 |
El Haddad, K., Dupont, S., Cakmak, H., & Dutoit, T. (2015). Shaking and Speech-Smile Vowels Classification: An Attempt at Amusement Arousal Estimation from Speech Signals. IEEE Global Conference on Signal and Information Processing.
El Haddad, K., Cakmak, H., Moinet, A., Dupont, S., & Dutoit, T. (2015). An HMM Approach for Synthesizing Amused Speech with a Controllable Intensity of Smile [Paper presentation]. IEEE International Symposium on Signal Processing and Information Technology.
El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Towards a Level Assessment System of Amusement in Speech Signals: Amused Speech Components Classification [Paper presentation]. IEEE International Symposium on Signal Processing and Information Technology.
Cakmak, H., El Haddad, K., & Dutoit, T. (2015). GMM-based Synchronization rules for HMM-based Audio-Visual laughter synthesis [Paper presentation]. 6th International Conference on Affective Computing and Intelligent Interaction (ACII 2015), Xi'an, China. |
Devillers, L., Rosset, S., Dubuisson Duplessis, G., Sehili, M. A., Béchade, L., Delaborde, A., Gossart, C., Letard, V., Yang, F., Yemez, Y., Türker, B. B., Sezgin, M., El Haddad, K., Dupont, S., Luzzati, D., Estève, Y., Gilmartin, E., & Campbell, N. (2015). Multimodal Data Collection of Human-Robot Humorous Interactions in the JOKER Project [Paper presentation]. Affective Computing and Intelligent Interaction.
El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Breath and Repeat: An Attempt at Enhancing Speech-Laugh Synthesis Quality [Paper presentation]. European Signal Processing Conference.
El Haddad, K., Dupont, S., d'Alessandro, N., & Dutoit, T. (2015). An HMM-based Speech-smile Synthesis System: An Approach for Amusement Synthesis [Paper presentation]. 3rd Intl Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE'15), Ljubljana, Slovenia.
El Haddad, K., Dupont, S., Urbain, J., & Dutoit, T. (2015). Speech-Laughs: an HMM-based Approach for Amused Speech Synthesis. IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings.
Cakmak, H., El Haddad, K., & Dutoit, T. (2015). Audio-visual laughter synthesis system [Paper presentation]. 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, Enschede, Netherlands. |
El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Towards a Speech Synthesis System with Controllable Amusement Levels [Paper presentation]. 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, Enschede, Netherlands. |
El Haddad, K., Moinet, A., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Using MAGE for Real Time Speech-Laugh Synthesis [Paper presentation]. 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, Enschede, Netherlands. |