Postdoctoral Researcher
Interdisciplinary Study of Human Interaction
I hold a PhD in Linguistics, specializing in Computer Science, from Aix-Marseille Université. My doctoral research was supervised by Dr. Philippe Blache, Dr. Magalie Ochs, Dr. Roxane Bertrand, and Dr. Stéphane Rauzy. I also collaborated with Dr. Louis-Philippe Morency during a research stay at Carnegie Mellon University. I was funded by the ILCB and affiliated with both the Laboratoire Parole et Langage (UMR 7309) and the Laboratoire d'Informatique et Systèmes (UMR 7020).
My research adopts an interdisciplinary approach to the study of human-to-human and human-computer interaction. During my PhD, I focused on conversational feedback and its role in communication. Currently, I am exploring individual variability and adaptability in conversation, particularly from the perspective of speaker style and alignment mechanisms, using a multimodal approach.
My expertise includes:
I would be delighted to connect with anyone interested in discussing my research or exploring collaboration opportunities. Please don't hesitate to reach out to me at auriane.boudin@univ-amu.fr.
1. Boudin, A., Rauzy, S., Bertrand, R., Ochs, M., Blache, P., & Morency, L.-P. Exploring Listener Variability: A Study of Listening Styles and Alignment in Spontaneous Conversations. In preparation.
2. Boudin, A., Rauzy, S., Bertrand, R., Ochs, M., & Blache, P. How is Your Feedback Perceived? An Experimental Study of Anticipated and Delayed Conversational Feedback. In press.
3. Boudin, A., Bertrand, R., Rauzy, S., Ochs, M., & Blache, P. (2024). A Multimodal Model for Predicting Feedback Position and Type During Conversation. Speech Communication, 103066. (access)
4. Pellet-Rostaing, A., Bertrand, R., Boudin, A., Rauzy, S., & Blache, P. (2023). A multimodal approach for modeling engagement in conversation. Frontiers in Computer Science, 5, 1062342. (access)
5. Ochs, M., Pergandi, J. M., Ghio, A., André, C., Sainton, P., Ayad, E., Boudin, A., & Bertrand, R. (2023). A forum theater corpus for discrimination awareness. Frontiers in Computer Science, 5, 1081586. (access)
6. Amoyal, M., Bertrand, R., Bigi, B., Boudin, A., Meunier, C., Pallaud, B., Priego-Valverde, B., Rauzy, S., & Tellier, M. (2022). Principes et outils pour l’annotation des corpus. Travaux Interdisciplinaires sur la Parole et le Langage, 38. (access)
1. Kebe, G. Y., Birlikci, M. D., Boudin, A., Ishii, R., Girard, J. M., & Morency, L.-P. GeSTICS: A Multimodal Corpus for Studying Gesture Synthesis in Two-party Interactions with Contextualized Speech. Under review.
2. Boudin, A., Rauzy, S., Bertrand, R., Ochs, M., & Blache, P. (2024). The Distracted Ear: How Listeners Shape Conversational Dynamics. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 15872-15887).
3. Boudin, A., Bertrand, R., Rauzy, S., Houlès, M., Legou, T., Ochs, M., & Blache, P. (2023, October). SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal. In Companion Publication of the 25th International Conference on Multimodal Interaction (pp. 344-352). (access)
4. Boudin, A., Bertrand, R., Ochs, M., Blache, P., & Rauzy, S. (2022). Are you Smiling When I am Speaking? In Proceedings of the Smiling and Laughter across Contexts and the Life-span Workshop @ LREC 2022. (access)
5. Boudin, A. (2022). Interdisciplinary corpus-based approach for exploring multimodal conversational feedback. In Proceedings of the 2022 International Conference on Multimodal Interaction (pp. 705-710). (access)
6. Boudin, A., Bertrand, R., Rauzy, S., Ochs, M., & Blache, P. (2021). A multimodal model for predicting conversational feedbacks. In International Conference on Text, Speech, and Dialogue (pp. 537-549). Springer, Cham. (access)