On the Representational Capacity of Recurrent Neural Language Models
Published in Proceedings of EMNLP 2023, December 2023
This work investigates the computational expressivity of language models based on recurrent neural networks (RNNs). We extend the Turing completeness result of Siegelmann and Sontag (1992) to the probabilistic case, showing that a rationally weighted recurrent neural language model (RLM) with unbounded computation time can simulate any probabilistic Turing machine (PTM).
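To give a flavor of why rational weights matter, here is a minimal sketch (illustrative only, not the paper's construction; all names are made up) of the stack encoding underlying Siegelmann-and-Sontag-style simulations: an unbounded binary stack is stored exactly in a single rational number, and pushing and popping are affine maps of the kind a rational-weighted RNN unit can compute.

```python
from fractions import Fraction

# Encode a binary stack s_1 s_2 ... s_n (s_1 on top) as the rational
# number sum_i (2*s_i + 1) / 4^i. Using "digits" 1 and 3 in base 4
# keeps encodings of distinct stacks well separated, so the top bit
# can be read off with a single threshold.

def encode(stack):
    q = Fraction(0)
    for s in reversed(stack):
        q = (q + 2 * s + 1) / 4   # each push is one affine update
    return q

def push(q, bit):
    # Pushing a bit is an affine map on q, i.e. one "neuron" step.
    return (q + 2 * bit + 1) / 4

def top(q):
    # Top bit is 1 iff q >= 1/2: a top-1 stack encodes into [3/4, 1),
    # a top-0 stack into [1/4, 1/2).
    return 1 if q >= Fraction(1, 2) else 0

def pop(q):
    # Popping inverts the affine push map.
    return 4 * q - (2 * top(q) + 1)

q = encode([1, 0, 1])   # stack with 1 on top
assert top(q) == 1
q = pop(q)
assert top(q) == 0
q = push(q, 1)
assert top(q) == 1
```

Roughly, two such stacks suffice to emulate a Turing machine tape in the classic construction; the paper's contribution is the extension of this line of reasoning to the probabilistic setting, where the RLM must also match the PTM's distribution over strings.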
BibTeX citation:
@inproceedings{nowak-etal-2023-representational,
    title = "On the Representational Capacity of Recurrent Neural Language Models",
    author = "Nowak, Franz and
      Svete, Anej and
      Du, Li and
      Cotterell, Ryan",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
}