Hey, I’m Anej.¹

I’m a third-year PhD fellow at the ETH AI Center, working at the intersection of formal language theory and modern language models. I try to understand what neural networks like transformers can (and can’t) do—what problems they can solve, what aspects of language they capture, and whether they can actually “reason”. You can find my research here.

Since the summer of 2025, I have also been a student researcher at the Allen Institute for AI (Ai2), where I work with Ashish Sabharwal on reasoning and problem-solving in language models.

I’m co-advised by Prof. Ryan Cotterell and Prof. Valentina Boeva. Before my PhD, I did a master’s in data science at ETH Zürich and a bachelor’s in computer science & mathematics at the University of Ljubljana. If you’re curious, my full CV is here.

I also co-organize the Formal Languages and Neural Networks (FLaNN) Seminar.

News & Upcoming

  • December 2025: Giving a talk at the NeurIPS 2025 Workshop on Principles of Generative Modeling.
  • July 2025: Organized a tutorial on The Underlying Logic of Language Models at ICML 2025.
  • August 2024: Organized a tutorial on Computational Expressivity of Neural Language Models at ACL 2024.
  • July 2023: Taught a course on Language Models and Formal Language Theory at ESSLLI 2023.

Outside of Research

I like reading, cooking, running, and hiking. I also spend an unreasonable amount of time on aquascaping—the art of designing underwater landscapes. It’s niche, but a lot of fun.

Recent Publications

¹ The easiest way to pronounce it is to imagine saying “an a” in American English. Not perfect, but close enough.