How Aligned Are Unimodal Language and Graph Encodings of Chemical Molecules?
| Authors | |
|---|---|
| Publication date | 2025 |
| Host editors | |
| Book title | The 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics |
| Book subtitle | proceedings of the conference : IJCNLP-AACL 2025 : December 20-24, 2025 |
| ISBN (electronic) | |
| Event | 14th International Joint Conference on Natural Language Processing and 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics |
| Volume | |
| Issue number | 1 |
| Pages (from-to) | 1084-1097 |
| Number of pages | 14 |
| Publisher | Kerrville, TX: Association for Computational Linguistics |
| Organisations | |
| Abstract | Chemical molecules can be represented as graphs or as language descriptions. Training unimodal models on graphs yields different encodings than training them on language. The existing literature therefore force-aligns the unimodal models during training in order to use them in downstream applications such as drug discovery. But to what extent are graph and language unimodal model representations inherently aligned, i.e., aligned prior to any force-alignment training? Knowing this is useful for more expedient and effective force-alignment. For the first time, we explore methods to gauge the alignment of graph and language unimodal models. We find compelling differences between models in their ability to represent slight structural differences without force-alignment. We also present a unified unimodal alignment (U2A) benchmark for gauging the inherent alignment between graph and language encoders, which we make available with this paper. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://aclanthology.org/2025.ijcnlp-long.59/ |
| Downloads | 2025.ijcnlp-long.59 (Final published version) |
| Permalink to this page | |
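
The record above does not specify which alignment metric the paper uses. As a minimal sketch of one common way to gauge the inherent alignment between two frozen encoders, the snippet below computes linear CKA (centered kernel alignment) over paired molecule embeddings. The `lang_emb` and `graph_emb` arrays are hypothetical placeholders standing in for outputs of, e.g., a SMILES language model and a molecular graph neural network; they are not the paper's data or method.

```python
import numpy as np


def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two sets of paired representations.

    X: (n, d1) embeddings from one encoder; Y: (n, d2) from another.
    Row i of each matrix must correspond to the same molecule.
    """
    # Center each feature so CKA is invariant to translation.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(cross / (norm_x * norm_y))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 256  # number of molecules encoded by both models
    # Random placeholder embeddings; real ones would come from the
    # two unimodal encoders applied to the same set of molecules.
    lang_emb = rng.standard_normal((n, 768))
    # Correlated second view, so the score lands well above chance.
    graph_emb = lang_emb[:, :300] + 0.5 * rng.standard_normal((n, 300))
    print(f"Linear CKA: {linear_cka(lang_emb, graph_emb):.3f}")
```

A score near 1 indicates the two encoders organize molecules similarly up to a linear transform; scores near 0 suggest little shared structure before any force-alignment training.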
