Profile

El Haddad Kevin

Université de Mons - UMONS > Faculté Warocqué d'Economie et de Gestion > Service de Management financier et Dynamiques territoriales

Université de Mons - UMONS > Faculté Polytechnique > Service Information, Signal et Intelligence artificielle

Main Referenced Co-authors
Dutoit, Thierry  (35)
Dupont, Stéphane  (20)
Cakmak, Huseyin  (15)
Tits, Noé  (14)
Gilmartin, Emer (5)
Main Referenced Unit & Research Centers
CRTI - Centre de Recherche en Technologie de l'Information (34)
Main Referenced Disciplines
Library & information sciences (46)
Computer science (14)

Publications (total 46)

The most downloaded
355 downloads
Cakmak, H., El Haddad, K., Riche, N., Leroy, J., Marighetto, P., Turker, B. B., Khaki, H., Pulisci, R., Gilmartin, E., Haider, F., Cengiz, K., Sulir, M., Torre, I., Marzban, S., Yazici, R., Bagci, F. B., Gazi Kili, V., Sezer, H., & Yenge, S. B. (2018). EASA : Environment Aware Social Agent. Paper presented at Proceedings of the 10th International Summer Workshop on Multimodal Interfaces - eNTERFACE'15, Mons, Belgium. https://hdl.handle.net/20.500.12907/42107
The most cited
28 citations (Scopus®)
Devillers, L., Rosset, S., Dubuisson Duplessis, G., Sehili, M. A., Béchade, L., Delaborde, A., Gossart, C., Letard, V., Yang, F., Yemez, Y., Türker, B. B., Sezgin, M., El Haddad, K., Dupont, S., Luzzati, D., Estève, Y., Gilmartin, E., & Campbell, N. (2015). Multimodal Data Collection of Human-Robot Humorous Interactions in the JOKER Project. Paper presented at Affective Computing and Intelligent Interaction. https://hdl.handle.net/20.500.12907/41750

Tits, N., El Haddad, K., & Dutoit, T. (25 November 2021). Analysis and Assessment of Controllability of an Expressive Deep Learning-Based TTS System. Informatics, 8 (4). doi:10.3390/informatics8040084
Peer reviewed

Tits, N., El Haddad, K., & Dutoit, T. (01 May 2021). ICE-Talk 2: Interface for Controllable Expressive TTS with perceptual assessment tool. Software Impacts, 8 (100055). doi:10.1016/j.simpa.2021.100055
Peer reviewed

Tits, N., El Haddad, K., & Dutoit, T. (2020). The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach. In Human 4.0 - From Biology to Cybernetic. IntechOpen. doi:10.5772/intechopen.89849

Tits, N., El Haddad, K., & Dutoit, T. (2020). Laughter Synthesis: Combining Seq2seq modeling with Transfer Learning. Paper presented at Conference of the International Speech Communication Association, Shanghai, China.

Tits, N., El Haddad, K., & Dutoit, T. (2020). ICE-Talk: an Interface for a Controllable Expressive Talking Machine. Paper presented at Conference of the International Speech Communication Association, Shanghai, China.

El Haddad, K., & Dutoit, T. (2020). Cross-Corpora Study of Smiles and Laughter Mimicry in Dyadic Interactions. Paper presented at Interdisciplinary Workshop on laughter and other non-verbal vocalisations, Bielefeld, Germany. doi:10.4119/lw2020-926

Tits, N., El Haddad, K., & Dutoit, T. (2020). Neural Speech Synthesis with Style Intensity Interpolation: A Perceptual Analysis. Paper presented at IEEE/ACM International Conference on Human-Robot Interaction, Cambridge, United Kingdom. doi:10.1145/3371382.3378297

El Haddad, K., Tits, N., Velner, E., & Bohy, H. (2020). Cozmo4Resto: A Practical AI Application for Human-Robot Interaction. Paper presented at eNTERFACE Summer Workshop on Multimodal Interfaces, Ankara, Turkey.

El Haddad, K., Zajega, F., & Dutoit, T. (2019). An Open-Source Avatar for Real-Time Human-Agent Interaction Applications. Paper presented at Affective Computing and Intelligent Interaction.

El Haddad, K., Nallan Chakravarthula, S., & Kennedy, J. (2019). Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles. Paper presented at International Conference on Multimodal Interaction, Suzhou, China.

Tits, N., Wang, F., El Haddad, K., Pagel, V., & Dutoit, T. (2019). Visualization and Interpretation of Latent Spaces for Controlling Expressive Speech Synthesis through Audio Analysis. Paper presented at Conference of the International Speech Communication Association, Graz, Austria. doi:10.21437/Interspeech.2019-1426

Tits, N., El Haddad, K., & Dutoit, T. (2019). Exploring Transfer Learning for Low Resource Emotional TTS. In Intelligent Systems and Applications (pp. 52-60). Springer. doi:10.1007/978-3-030-29516-5_5

Tits, N., El Haddad, K., & Dutoit, T. (2019). Emotional Speech Datasets for English Speech Synthesis Purpose: A Review. In Intelligent Systems and Applications (pp. 61-66). Springer.

El Haddad, K., Rizk, Y., Heron, L., Hajj, N., Zhao, Y., Kim, J., Ngo Trong, T., Lee, M., Doumit, M., Lin, P., Kim, Y., & Cakmak, H. (2018). End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions. Journal of Science and Technology of the Arts.
Peer Reviewed verified by ORBi

Adigwe, A., Tits, N., El Haddad, K., Ostadabbas, S., & Dutoit, T. (15 October 2018). The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems. Paper presented at International Conference on Statistical Language and Speech Processing, Mons, Belgium.

El Haddad, K., Tits, N., & Dutoit, T. (2018). Annotating Nonverbal Conversation Expressions in Interaction Datasets. Paper presented at Interdisciplinary Workshop on laughter and other non-verbal vocalisations in speech, Paris, France.

El Haddad, K., Cakmak, H., & Dutoit, T. (2018). On Laughter Intensity Level: Analysis and Estimation. Paper presented at Interdisciplinary Workshop on laughter and other non-verbal vocalisations in speech, Paris, France.

Tits, N., El Haddad, K., & Dutoit, T. (2018). ASR-based Features for Emotion Recognition: A Transfer Learning Approach. Paper presented at Grand Challenge and Workshop on Human Multimodal Language, Melbourne, Australia.

Devillers, L., Rosset, S., Dubuisson Duplessis, G., Bechade, L., Yemez, Y., Turker, B. B., Sezgin, M., El Haddad, K., Dupont, S., Deléglise, P., Estève, Y., Lailler, C., Gilmartin, E., & Campbell, N. (2018). Multifaceted Engagement in Social Interaction with a Machine: the JOKER Project. Paper presented at Workshop on Large-scale Emotion Recognition and Analysis, Xi'an, China.

El Haddad, K., Heron, L., Kim, J., Lee, M., Dupont, S., Dutoit, T., & Truong, K. (2018). A Dyadic Conversation Dataset On Moral Emotions. Paper presented at Workshop on Large-scale Emotion Recognition and Analysis, Xi'an, China.

Cakmak, H., El Haddad, K., Riche, N., Leroy, J., Marighetto, P., Turker, B. B., Khaki, H., Pulisci, R., Gilmartin, E., Haider, F., Cengiz, K., Sulir, M., Torre, I., Marzban, S., Yazici, R., Bagci, F. B., Gazi Kili, V., Sezer, H., & Yenge, S. B. (2018). EASA : Environment Aware Social Agent. Paper presented at Proceedings of the 10th International Summer Workshop on Multimodal Interfaces - eNTERFACE'15, Mons, Belgium.

Oertel, C., Jonell, P., El Haddad, K., Szekely, E., & Gustafson, J. (2017). Using crowd-sourcing for the design of listening agents: challenges and opportunities. Paper presented at 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, Glasgow, United Kingdom.

Bechade, L., El Haddad, K., Bourquin, J., Dupont, S., & Devillers, L. (2017). A Corpus for Experimental Study of Affect Bursts in Human-robot Interaction. Paper presented at 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, Glasgow, United Kingdom.

El Haddad, K. (2017). Nonverbal conversation expressions processing for human-agent interactions. Paper presented at Affective Computing and Intelligent Interaction, San Antonio, United States - Texas.

El Haddad, K., Torre, I., Gilmartin, E., Cakmak, H., Dupont, S., Dutoit, T., & Campbell, N. (2017). Introducing AmuS: The Amused Speech Database. Paper presented at International Conference on Statistical Language and Speech Processing, Le Mans, France.

El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2017). Amused speech components analysis and classification: Towards an amusement arousal level assessment system. Computers and Electrical Engineering.
Peer Reviewed verified by ORBi

El Haddad, K., Cakmak, H., Doumit, M., Pironkov, G., & Ayvaz, U. (2017). Social Communicative Events in Human Computer Interactions. Paper presented at Proceedings of the 11th International Summer Workshop on Multimodal Interfaces - eNTERFACE'16, Twente, Netherlands.

El Haddad, K., Cakmak, H., Gilmartin, E., Dupont, S., & Dutoit, T. (2016). Towards a Listening Agent: A System Generating Audiovisual Laughs and Smiles to Show Interest. Paper presented at International Conference on Multimodal Interfaces, Tokyo, Japan.

El Haddad, K., Cakmak, H., Sulir, M., Dupont, S., & Dutoit, T. (2016). Audio Affect Burst Synthesis: A Multilevel Synthesis System for Emotional Expressions. Paper presented at European Signal Processing Conference, Budapest, Hungary.

Cakmak, H., El Haddad, K., & Pulisci, R. (2016). A real time OSC controlled agent for human machine interactions. Paper presented at Workshop on Artificial Companion WACAI, Brest, France.

El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2016). Laughter and Smile Processing for Human-Computer Interactions. Paper presented at Workshop 'Just talking - casual talk among humans and machines' of LREC 2016, Portorož, Slovenia.

El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2016). AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis. Paper presented at Tenth International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia.

El Haddad, K., Dupont, S., & Dutoit, T. (2016). Affect bursts generation - v1 - JOKER Deliverable 5.3.

El Haddad, K., Dupont, S., & Dutoit, T. (2016). Speech Synthesis - v1 - JOKER Deliverable 5.2.

El Haddad, K., Dupont, S., Cakmak, H., & Dutoit, T. (2015). Shaking and Speech-Smile Vowels Classification: An Attempt at Amusement Arousal Estimation from Speech Signals. IEEE Global Conference on Signal and Information Processing.
Peer reviewed

El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Towards a Level Assessment System of Amusement in Speech Signals: Amused Speech Components Classification. Paper presented at IEEE International Symposium on Signal Processing and Information Technology.

El Haddad, K., Cakmak, H., Moinet, A., Dupont, S., & Dutoit, T. (2015). An HMM Approach for Synthesizing Amused Speech with a Controllable Intensity of Smile. Paper presented at IEEE International Symposium on Signal Processing and Information Technology.

Devillers, L., Rosset, S., Dubuisson Duplessis, G., Sehili, M. A., Béchade, L., Delaborde, A., Gossart, C., Letard, V., Yang, F., Yemez, Y., Türker, B. B., Sezgin, M., El Haddad, K., Dupont, S., Luzzati, D., Estève, Y., Gilmartin, E., & Campbell, N. (2015). Multimodal Data Collection of Human-Robot Humorous Interactions in the JOKER Project. Paper presented at Affective Computing and Intelligent Interaction.

Cakmak, H., El Haddad, K., & Dutoit, T. (2015). GMM-based Synchronization rules for HMM-based Audio-Visual laughter synthesis. Paper presented at 6th International Conference on Affective Computing and Intelligent Interaction (ACII 2015), Xi'an, China.

El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Breath and Repeat: An Attempt at Enhancing Speech-Laugh Synthesis Quality. Paper presented at European Signal Processing Conference.

El Haddad, K., Dupont, S., D'alessandro, N., & Dutoit, T. (2015). An HMM-based Speech-smile Synthesis System: An Approach for Amusement Synthesis. Paper presented at 3rd Intl Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE'15), Ljubljana, Slovenia.

El Haddad, K., Dupont, S., Urbain, J., & Dutoit, T. (2015). Speech-Laughs: an HMM-based Approach for Amused Speech Synthesis. IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings.
Peer reviewed

Cakmak, H., El Haddad, K., & Dutoit, T. (2015). Audio-visual laughter synthesis system. Paper presented at 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, Enschede, Netherlands.

El Haddad, K., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Towards a Speech Synthesis System with Controllable Amusement Levels. Paper presented at 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, Enschede, Netherlands.

El Haddad, K., Moinet, A., Cakmak, H., Dupont, S., & Dutoit, T. (2015). Using MAGE for Real Time Speech-Laugh Synthesis. Paper presented at 4th Interdisciplinary Workshop on Laughter and Other Non-Verbal Vocalisations in Speech, Enschede, Netherlands.
