CRTI - Centre de Recherche en Technologie de l'Information
Disciplines :
Library & information sciences
Author, co-author :
Urbain, Jérôme ; Université de Mons > Faculté Polytechnique > Information, Signal et Intelligence artificielle
Cakmak, Huseyin ; Université de Mons > Faculté Polytechnique > Information, Signal et Intelligence artificielle
Charlier, Aurélie
Denti, Maxime
Dutoit, Thierry ; Université de Mons > Faculté Polytechnique > Information, Signal et Intelligence artificielle
Dupont, Stéphane ; Université de Mons > Faculté Polytechnique > Information, Signal et Intelligence artificielle
Language :
English
Title :
Arousal-Driven Synthesis of Laughter
Publication date :
01 April 2014
Journal title :
IEEE Journal of Selected Topics in Signal Processing
ISSN :
1932-4553
Publisher :
Institute of Electrical and Electronics Engineers, New York, NY, United States
Peer reviewed :
Peer Reviewed verified by ORBi
Research unit :
F105 - Information, Signal et Intelligence artificielle
Research institute :
R300 - Institut de Recherche en Technologies de l'Information et Sciences de l'Informatique
R450 - Institut NUMEDIART pour les Technologies des Arts Numériques