Large Language Lab

This project aims to "get under the hood" of large language models by investigating how they structure and produce historical narratives. Using methods such as circuit tracing, large-scale analysis of model behavior (including the effects of reinforcement learning), and analysis of token probabilities, we study the narrative mechanics of AI systems. We are also indexing and examining training datasets and data pipelines to uncover how historical representations become encoded in model parameters. Ultimately, we ask: What are the latent histories and system mechanics that shape what AI "knows" and how it knows it?
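To give a concrete sense of one of the methods above, here is a toy sketch of token-probability analysis: converting a model's raw logits for candidate next tokens into a probability distribution and inspecting which continuation the model favors. All token strings and logit values below are invented for illustration; in practice the logits would come from a real model's output layer.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits a model might assign to candidate next tokens
# after a prompt like "The French Revolution began in"
logits = {"1789": 9.1, "1790": 5.3, "1788": 4.8, "Paris": 3.2}

probs = softmax(logits)
top_token = max(probs, key=probs.get)
print(top_token, round(probs[top_token], 3))
```

Comparing such distributions across prompts and model versions is one way to quantify how strongly a model commits to a particular historical claim, rather than relying on a single sampled output.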