Profile

Delbrouck, Jean-Benoit

Université de Mons (UMONS), Faculté Polytechnique, Service Information, Signal et Intelligence artificielle

Main Referenced Co-authors
DUPONT, Stéphane  (12)
SEDDATI, Omar  (2)
TITS, Noé  (2)
Brousmiche, Mathilde  (1)
HUBENS, Nathan  (1)
Main Referenced Unit & Research Centers
CRTI - Centre de Recherche en Technologie de l'Information (9)
Main Referenced Disciplines
Library & information sciences (12)
Computer science (2)

Publications (total 12)

The most downloaded
77 downloads
Delbrouck, J.-B., & Dupont, S. (2017). Modulating and attending the source image during encoding improves Multimodal Translation [Paper presentation]. NIPS 2017 Workshop on Visually-Grounded Interaction and Language (ViGIL), Long Beach, United States - California. https://hdl.handle.net/20.500.12907/42056

The most cited

35 citations (Scopus®)

Delbrouck, J.-B., Tits, N., Brousmiche, M., & Dupont, S. (2020). A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis. In Second Grand Challenge and Workshop on Multimodal Language - ACL 2020. doi:10.18653/v1/2020.challengehml-1.1 https://hdl.handle.net/20.500.12907/42328

Delbrouck, J.-B., Tits, N., & Dupont, S. (2020). Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition. In NLP Beyond Text (NLPBT) - EMNLP 2020.
Peer reviewed

Delbrouck, J.-B., Tits, N., Brousmiche, M., & Dupont, S. (2020). A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis. In Second Grand Challenge and Workshop on Multimodal Language - ACL 2020. doi:10.18653/v1/2020.challengehml-1.1
Peer reviewed

Delbrouck, J.-B., & Dupont, S. (2019). Adversarial reconstruction for Multi-modal Machine Translation. ORBi UMONS - University of Mons. https://orbi.umons.ac.be/handle/20.500.12907/42275

Delbrouck, J.-B., Maiorca, A., Hubens, N., & Dupont, S. (2019). Modulated Self-attention Convolutional Network for VQA. In NeurIPS 2019 Workshop on Visually-Grounded Interaction and Language (ViGIL).
Peer reviewed

Delbrouck, J.-B., & Dupont, S. (2018). Object-oriented Targets for Visual Navigation using Rich Semantic Representations. In NIPS 2018 Workshop on Visually-Grounded Interaction and Language (ViGIL).
Peer reviewed

Delbrouck, J.-B., & Dupont, S. (30 October 2018). UMONS Submission for WMT18 Multimodal Translation Task [Paper presentation]. Third Conference on Machine Translation, Brussels, Belgium.
Peer reviewed

Delbrouck, J.-B., & Dupont, S. (2018). Bringing back simplicity and lightliness into neural image captioning. ArXiv e-prints.
Peer reviewed

Delbrouck, J.-B., & Dupont, S. (2017). Modulating and attending the source image during encoding improves Multimodal Translation [Paper presentation]. NIPS 2017 Workshop on Visually-Grounded Interaction and Language (ViGIL), Long Beach, United States - California.

Delbrouck, J.-B., & Dupont, S. (2017). An empirical study on the effectiveness of images in Multimodal Neural Machine Translation [Paper presentation]. Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark.

Delbrouck, J.-B., Dupont, S., & Seddati, O. (2017). Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation [Paper presentation]. GLU 2017 International Workshop on Grounding Language Understanding, Stockholm, Sweden.

Seddati, O., Delbrouck, J.-B., Dupont, S., & Mahmoudi, S. (25 April 2017). Deep Features for Big Data [Poster presentation]. Journée scientifique du Pôle hainuyer 'Les données au cœur de notre devenir : les enjeux des big data', Tournai (e-campus), Belgium.

Delbrouck, J.-B., & Dupont, S. (2017). Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation. ArXiv e-prints.