Search results
Results: 102
Pouw, C., Alishahi, A., & Zuidema, W. (2025). A Linguistically Motivated Analysis of Intonational Phrasing in Text-to-Speech Systems: Revealing Gaps in Syntactic Sensitivity. In G. Boleda & M. Roth (Eds.), Proceedings of the 29th Conference on Computational Natural Language Learning (CoNLL 2025) (pp. 126-140). Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.conll-1.9
van Sprang, A., Acar, E., & Zuidema, W. (2024). Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework. arXiv preprint (v1). https://doi.org/10.48550/arXiv.2410.06070
van der Wal, O., Bachmann, D., Leidinger, A., van Maanen, L., Zuidema, W., & Schulz, K. (2024). Undesirable Biases in NLP: Addressing Challenges of Measurement. Journal of Artificial Intelligence Research, 79, 1-40. https://doi.org/10.1613/jair.1.15195
de Heer Kloots, M., & Zuidema, W. (2024). Human-like Linguistic Biases in Neural Speech Models: Phonetic Categorization and Phonotactic Constraints in Wav2Vec2.0. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 25, 4593-4597. https://doi.org/10.21437/Interspeech.2024-2490
Mohebbi, H., Jumelet, J., Hanna, M., Alishahi, A., & Zuidema, W. (2024). Transformer-specific Interpretability. In M. Mesgar & S. Loáiciga (Eds.), Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts (pp. 21-26). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.eacl-tutorials.4
Jumelet, J., Zuidema, W., & Sinclair, A. (2024). Do Language Models Exhibit Human-like Structural Priming Effects? In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024 (pp. 14727-14742). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-acl.877
Langedijk, A., Mohebbi, H., Sarti, G., Zuidema, W., & Jumelet, J. (2024). DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers. In K. Duh, H. Gomez, & S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024 (pp. 4764-4780). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-naacl.296
Bachmann, D., van der Wal, O., Chvojka, E., Zuidema, W. H., van Maanen, L., & Schulz, K. (2024). fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement. Minds and Machines, 34(4), Article 37. https://doi.org/10.1007/s11023-024-09695-9