Search results


Results: 9
  • Liao, B., Herold, C., Hashemi, S. H., Vasilev, S., Khadivi, S., & Monz, C. (2025). ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning. In W. Che, J. Nabende, E. Shutova, & M. T. Pilehvar (Eds.), Findings of the Association for Computational Linguistics: ACL 2025 (pp. 24779–24804). Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.findings-acl.1272 (Open Access)
  • Vasilev, S., Herold, C., Liao, B., Hashemi, S. H., Khadivi, S., & Monz, C. (2025). Unilogit: Robust Machine Unlearning for LLMs Using Uniform-Target Self-Distillation. In W. Che, J. Nabende, E. Shutova, & M. T. Pilehvar (Eds.), Findings of the Association for Computational Linguistics: ACL 2025 (pp. 22453–22472). Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.findings-acl.1154 (Open Access)
  • Liao, B., Herold, C., Khadivi, S., & Monz, C. (2024). ApiQ: Finetuning of 2-Bit Quantized Large Language Model. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 20996–21020). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.emnlp-main.1168 (Open Access)
  • Chen, X., Liao, B., Qi, J., Eustratiadis, P., Monz, C., Bisazza, A., & de Rijke, M. (2024). The SIFo Benchmark: Investigating the Sequential Instruction Following Ability of Large Language Models. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 1691–1706). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-emnlp.92 (Open Access)
  • Liao, B., Herold, C., Khadivi, S., & Monz, C. (2024). IKUN for WMT24 General MT Task: LLMs Are Here for Multilingual Machine Translation. In B. Haddow, T. Kocmi, P. Koehn, & C. Monz (Eds.), Proceedings of the Ninth Conference on Machine Translation (pp. 263–269). Association for Computational Linguistics. https://doi.org/10.48550/arXiv.2408.11512, https://doi.org/10.18653/v1/2024.wmt-1.19 (Open Access)
  • Liao, B., & Monz, C. (2023). Ask Language Model to Clean Your Noisy Translation Data. In H. Bouamor, J. Pino, & K. Bali (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 3215–3236). Association for Computational Linguistics. https://aclanthology.org/2023.findings-emnlp.212/ (Open Access)
  • Liao, B., Meng, Y., & Monz, C. (2023). Parameter-Efficient Fine-Tuning without Introducing New Latency. In A. Rogers, J. Boyd-Graber, & N. Okazaki (Eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 4242–4260). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-long.233 (Open Access)
  • Liao, B., Tan, S., & Monz, C. (2023). Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning. In Thirty-seventh Annual Conference on Neural Information Processing Systems. OpenReview. https://openreview.net/forum?id=J8McuwS3zY (Open Access)
  • Liao, B., Thulke, D., Hewavitharana, S., Ney, H., & Monz, C. (2022). Mask More and Mask Later: Efficient Pre-training of Masked Language Models by Disentangling the [MASK] Token. In Y. Goldberg, Z. Kozareva, & Y. Zhang (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 1478–1492). Association for Computational Linguistics. https://doi.org/10.48550/arXiv.2211.04898, https://doi.org/10.18653/v1/2022.findings-emnlp.106 (Open Access)