Extending CLIP for Category-to-Image Retrieval in E-Commerce
| Authors | |
|---|---|
| Publication date | 2022 |
| Host editors | |
| Book title | Advances in Information Retrieval |
| Book subtitle | 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022: proceedings |
| ISBN | |
| ISBN (electronic) | |
| Series | Lecture Notes in Computer Science |
| Event | 44th European Conference on IR Research |
| Volume | |
| Issue number | I |
| Pages (from-to) | 289–303 |
| Publisher | Cham: Springer |
| Organisations | |
| Abstract | E-commerce provides rich multimodal data that is barely leveraged in practice. One aspect of this data is a category tree that is used in search and recommendation. However, in practice, during a user's session there is often a mismatch between the textual and the visual representation of a given category. Motivated by this problem, we introduce the task of category-to-image retrieval in e-commerce and propose a model for the task, CLIP-ITA. The model leverages information from multiple modalities (textual, visual, and attribute) to create product representations. We explore how adding information from these modalities impacts the model's performance. In particular, we observe that CLIP-ITA significantly outperforms a comparable model that leverages only the visual modality and a comparable model that leverages the visual and attribute modalities. |
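The abstract describes building product representations from textual, visual, and attribute embeddings and retrieving products for a category query. As an illustration only, the sketch below shows one simple way such a pipeline could look: a hypothetical weighted late-fusion of per-modality embeddings followed by cosine-similarity ranking against a category embedding. The function names, the fusion scheme, and the random stand-in embeddings are assumptions for illustration, not CLIP-ITA's actual architecture.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize embeddings to unit length so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fuse_product_embedding(text_emb, image_emb, attr_emb, weights=(1.0, 1.0, 1.0)):
    # Hypothetical late fusion: weighted sum of per-modality embeddings,
    # then renormalization. CLIP-ITA's real fusion mechanism may differ.
    w_t, w_i, w_a = weights
    return l2_normalize(w_t * text_emb + w_i * image_emb + w_a * attr_emb)

def rank_products(category_emb, product_embs):
    # Score each fused product embedding against the category query by
    # cosine similarity; return indices sorted best-first plus raw scores.
    q = l2_normalize(category_emb)
    scores = product_embs @ q
    return np.argsort(-scores), scores

# Toy example: random vectors stand in for encoder outputs.
rng = np.random.default_rng(0)
d = 8
products = np.stack([
    fuse_product_embedding(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d))
    for _ in range(5)
])
order, scores = rank_products(rng.normal(size=d), products)
print(order)  # product indices, most similar category match first
```

The late-fusion weighted sum is just one design choice; attention-based or gated fusion over the modalities would fit the same interface.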
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1007/978-3-030-99736-6_20 |
| Permalink to this page | |
