Seminar: Recent Advances in Mechanistic Interpretability

CHANGED: Thursday 16:15–17:45, Room "Leibniz" at DFKI

Dr. Simon Ostermann, Natalia Skachkova
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)

Prerequisites

This seminar is primarily targeted at Master's students, but is also open to advanced Bachelor's students. We expect you to have a curious mind and an advanced familiarity with large language models. At the very least, we expect all students to have read (and understood :-)) the BERT paper and the Transformer paper.

Seminar Content

The rise of deep learning in AI has dramatically increased the performance of models across many subfields such as natural language processing and computer vision. In the last five years, large pretrained language models (LLMs) and their variants (BERT, ChatGPT, etc.) have changed the NLP landscape drastically. Such models have grown larger and larger, reaching increasingly impressive levels of performance and sometimes even surpassing humans.

A central issue with deep learning models with millions or billions of parameters is that they are essentially black boxes: from the model's parameters alone, it is not clear why the model exhibits a certain behavior or makes a certain classification decision. The rapidly growing field of interpretable and explainable AI (XAI) develops methods to peek into this black box, trying to understand the inner workings of such large models.

In this seminar we will investigate a subfield of XAI, namely Mechanistic Interpretability (MI). MI aims to understand and explain the internal workings of complex machine learning models, in particular deep neural networks, by "reverse-engineering" the mechanisms and computations encoded in a network's parameters. This involves analyzing how specific components of a model contribute to its decisions and outputs. The aim is to break down the "black box" nature of these models and reveal the underlying computations and internal representations that lead to predictions or behaviors.

We will first cover a range of basic MI methods and then focus on their applications within natural language processing; a small sketch of one such method follows below.
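
To give a first impression of what such methods look like in practice, here is a minimal, illustrative sketch of activation patching, one of the basic MI techniques: a hidden activation from a "clean" run is copied into a "corrupted" run to test whether that activation is causally responsible for the model's output. The toy model, random inputs, and hook placement are hypothetical stand-ins for illustration only, not taken from any of the seminar papers.

import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy two-layer network, standing in for a single transformer component.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

clean_input = torch.randn(1, 4)      # illustrative stand-in for a "clean" prompt
corrupted_input = torch.randn(1, 4)  # illustrative stand-in for a "corrupted" prompt

# Step 1: cache the hidden activation from the clean run.
cache = {}

def save_hook(module, inputs, output):
    cache["hidden"] = output.detach()

handle = model[1].register_forward_hook(save_hook)
with torch.no_grad():
    clean_out = model(clean_input)
handle.remove()

# Baseline: the corrupted run without any intervention.
with torch.no_grad():
    corrupted_out = model(corrupted_input)

# Step 2: re-run on the corrupted input, but patch in the clean activation.
# Returning a value from a forward hook replaces the module's output.
def patch_hook(module, inputs, output):
    return cache["hidden"]

handle = model[1].register_forward_hook(patch_hook)
with torch.no_grad():
    patched_out = model(corrupted_input)
handle.remove()

# If patching moves the output back toward the clean run, the patched
# activation is causally implicated in the model's behavior on this input.
print("clean:    ", clean_out)
print("corrupted:", corrupted_out)
print("patched:  ", patched_out)

In actual MI work, the same idea is applied to attention heads or MLP layers of a real transformer and to carefully constructed input pairs rather than random vectors.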

List of relevant Papers and Topics (subject to change)

Mechanistic Interpretability Methods

Findings of Mechanistic Interpretability

Model Editing, Knowledge Location and Extraction

Sparse Autoencoders and Monosemanticity

Some words on grading: This seminar is meant to be as interactive as possible. Final grades will be based on students' presentations and (optional) term papers, as well as on participation and discussion in class.

Participants are expected to prepare for each class by reading the relevant papers and, where necessary, doing additional background reading. Based on this preparation, they should be able to discuss the presented papers in depth and follow the relevant context in the discussion.
