Labs

Lab 1 – Explaining Neural Language Models from Internal Representations to Model Predictions

Alessio Miaschi, Gabriele Sarti

Abstract: As language models become larger and more sophisticated, the processes leading to their predictions grow increasingly difficult to understand. Research in NLP interpretability focuses on explaining the rationales driving model predictions and is crucial for ensuring trust and transparency when these systems are deployed in real-world scenarios.

In this laboratory, we will explore various techniques for analyzing Neural Language Models, such as feature attribution methods and diagnostic classifiers. Besides common approaches to inspecting models’ internal representations, we will also introduce prompting techniques to elicit model responses and motivate their usage as an alternative method for the behavioral study of model generations.
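To give a concrete flavor of the feature attribution methods covered in the lab, the sketch below computes input-times-gradient attributions for a small causal language model. It is a minimal illustration under assumed tooling (the Hugging Face transformers library and the gpt2 checkpoint), not the lab’s actual material:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed setup: any small causal LM works; "gpt2" is an illustrative choice.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    enc = tokenizer("The capital of France is", return_tensors="pt")

    # Embed the tokens manually so we can take gradients w.r.t. the embeddings.
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)

    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits

    # Attribute the score of the most likely next token at the final position.
    logits[0, -1].max().backward()

    # Input-times-gradient, summed over the embedding dimension: one score per token.
    scores = (embeds.grad * embeds).sum(-1).squeeze(0)
    for tok, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()),
                      scores.tolist()):
        print(f"{tok:>12} {s:+.4f}")

Each printed score indicates how strongly the corresponding input token contributed to the model’s top next-token prediction; gradient-based scores like these are among the simplest members of the feature attribution family discussed in the lab.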

Alessio Miaschi
Institute for Computational Linguistics “A. Zampolli” (CNR-ILC, Pisa)

Mail: alessio.miaschi@ilc.cnr.it

Bio: Alessio Miaschi is a Postdoctoral Researcher at the ItaliaNLP Lab of the Institute for Computational Linguistics “A. Zampolli” (CNR-ILC, Pisa). He received his Ph.D. in Computer Science from the University of Pisa in 2022 with a thesis on techniques for interpreting and understanding the linguistic knowledge implicitly encoded in state-of-the-art Neural Language Models. His current research interests mainly focus on the development and analysis of neural network models for language processing, as well as on the definition of NLP tools for educational applications.

Gabriele Sarti
University of Groningen – Netherlands

Mail: g.sarti@rug.nl

Bio: Gabriele Sarti is a Ph.D. student in the Computational Linguistics Group (GroNLP) at the University of Groningen, Netherlands. Previously, he worked as a research intern at Amazon Translate NYC, a research scientist at Aindo, and a research assistant at the ItaliaNLP Lab (CNR-ILC, Pisa). His research aims to improve our understanding of generative neural language models’ inner workings, with the ultimate goal of enhancing the controllability and robustness of these systems in human-AI interactions.

Lab 2 – Exploring Multi-Modal Neural Models and Their Applications

Lucia C. Passaro, Alessandro Bondielli

Abstract: Despite recent advances in Neural Language Models following the introduction of the Transformer architecture, perception and grounding, being intrinsically multi-modal, cannot be fully addressed without connecting models to real-world data and knowledge. A clear research trend emerges from the recent release of new Multi-Modal Large Language Models capable of processing both images and text, as well as from the introduction of novel strategies for text-to-image generation with diffusion models. In this laboratory, we aim to provide a theoretical and practical guide to the most prominent approaches to Multi-Modal Neural Language Models and the most relevant tasks involving their use.
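As a concrete, hedged example of the image-text setting, the snippet below performs zero-shot image-text matching with CLIP through the Hugging Face transformers library; the checkpoint and the sample image URL are illustrative choices rather than materials from the lab:

    import requests
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Assumed setup: an openly available CLIP checkpoint and a sample COCO image.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    captions = ["a photo of two cats", "a photo of a dog"]
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

    # Score each caption against the image and normalize to probabilities.
    with torch.no_grad():
        logits_per_image = model(**inputs).logits_per_image
    probs = logits_per_image.softmax(dim=-1)

    for caption, p in zip(captions, probs[0].tolist()):
        print(f"{p:.3f}  {caption}")

Contrastive dual encoders such as CLIP are only one family of multi-modal architectures; the same zero-shot pattern extends to larger models that generate text conditioned on images.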

Lucia C. Passaro
University of Pisa – Italy

Mail: lucia.passaro@unipi.it

Bio: Lucia C. Passaro (https://luciacpassaro.github.io/) is an Assistant Professor at the Department of Computer Science of the University of Pisa. She obtained her Ph.D. in Information Engineering in 2017 at the University of Pisa. Currently, her research efforts are focused on Natural Language Processing; in particular, she is working on Neural Language Models for Fact-Checking and Fake News Detection, Multimodality, and Information Extraction. Lucia is a member of the Computational Intelligence and Machine Learning Group, the Pervasive AI Lab, and the Computational Linguistics Laboratory.

Alessandro Bondielli
University of Pisa – Italy

Mail: alessandro.bondielli@unipi.it

Bio: Alessandro Bondielli is an Assistant Professor at the Department of Computer Science and the Department of Philology, Literature and Linguistics of the University of Pisa. He is a member of the Computational Linguistics Laboratory (CoLingLab) at the University of Pisa.
He received his Ph.D. in Smart Computing from the University of Florence in 2021. During his Ph.D., his research focused on the use of Neural Language Models for document categorization, fake news detection, and fact-checking.
Currently, he is interested in the theory, applications, and limitations of multi-modal neural models.