Hey, I’m Anej.¹
I’m a third-year PhD fellow at the ETH AI Center, working at the intersection of formal language theory and modern language models. I try to understand what neural networks like transformers can (and can’t) do—what problems they can solve, what aspects of language they capture, and whether they can actually “reason”. You can find my research here.
I’m co-advised by Prof. Ryan Cotterell and Prof. Valentina Boeva. Before my PhD, I did a master’s in data science at ETH Zürich and a bachelor’s in computer science & mathematics at the University of Ljubljana. If you’re curious, my full CV is here.
I also co-organize the Formal Languages and Neural Networks (FLaNN) Seminar.
In the summer of 2025, I am interning at the Allen Institute for AI (Ai2), where I’m working with Ashish Sabharwal.
Outside of Research
I like reading, cooking, running, and hiking. I also spend an unreasonable amount of time on aquascaping—the art of designing underwater landscapes. It’s niche, but a lot of fun.
Recent Publications
A Probability-Quality Trade-off in Aligned Language Models and its Relation to Sampling Adaptors, EMNLP 2024
On Efficiently Representing Regular Languages as RNNs, ACL 2024 Findings
On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning, ACL 2024
On Affine Homotopy between Language Encoders, NeurIPS 2024
¹ The easiest way to pronounce my name is to imagine saying “an a” in American English. Not perfect, but close enough.