Hey, I’m Anej.¹
I’m a fourth-year PhD fellow at the ETH AI Center, studying language models with formal language theory to understand what they can (and can’t) do.
I’m co-advised by Prof. Ryan Cotterell and Prof. Valentina Boeva. In 2025, I did a 9-month research internship at the
Allen Institute for AI (Ai2), where I worked with Ashish Sabharwal and William Merrill on reasoning and problem-solving in language models. In 2026, I am visiting
Noah’s ARK lab at the University of Washington, working with Prof. Noah Smith.
I also co-organize the Formal Languages and Neural Networks (FLaNN) Seminar.
Research Interests
- Expressivity of neural networks: What formal languages can transformers, RNNs, linear RNNs, and hybrid models represent and learn?
- Reasoning in language models: What happens computationally when models “think step by step”? Can we design more efficient ways for models to think?
- Diffusion models for text and looped transformers: How can we leverage parallel computation to make language models faster and more efficient?
News & Upcoming
Selected Talks
Teaching
I like teaching! Highlights: Head TA for Large Language Models (~600 students, 25+ TAs) and Natural Language Processing (~300 students) at ETH; tutorials at ICML 2025, ACL 2024, and a summer school course at ESSLLI 2023.
Selected Publications
Outside of Research
I like reading, cooking, running, and hiking. I also spend an unreasonable amount of time on aquascaping—the art of designing underwater landscapes. It’s niche, but a lot of fun.
¹ The easiest way to pronounce my name is to imagine saying “an a” in American English. Not perfect, but close enough.