Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
| Authors | |
|---|---|
| Publication date | 02-11-2023 |
| Book title | Thirty-seventh Annual Conference on Neural Information Processing Systems |
| Event | 37th Annual Conference on Neural Information Processing Systems |
| Number of pages | 24 |
| Publisher | OpenReview |
| Organisations | |
| Abstract | Parameter-efficient fine-tuning (PEFT) of pre-trained language models (PLMs) has emerged as a highly successful approach: it trains only a small number of parameters without sacrificing performance, and has become the de facto learning paradigm as PLMs grow in size. However, existing PEFT methods are not memory-efficient, because, like full fine-tuning, they still require caching most of the intermediate activations for the gradient computation. One effective way to reduce activation memory is to use a reversible model, whose intermediate activations need not be cached because they can be recomputed during the backward pass. Nevertheless, modifying a PLM into its reversible variant is not straightforward, since a reversible model has an architecture distinct from those of currently released PLMs. In this paper, we first investigate what is key to the success of existing PEFT methods, and find that it is essential to preserve the PLM's starting point when initializing a PEFT method. With this finding, we propose memory-efficient fine-tuning (MEFT), which inserts adapters into a PLM in a way that preserves the PLM's starting point and makes the model reversible without additional pre-training. We evaluate MEFT on the GLUE benchmark and five question-answering tasks with various backbones: BERT, RoBERTa, BART, and OPT. MEFT reduces activation memory by up to 84% relative to full fine-tuning while training only a negligible number of parameters. Moreover, MEFT matches full fine-tuning on GLUE and achieves comparable scores on the question-answering tasks. A similar finding also holds for an image classification task. (A minimal code sketch of the reversible-adapter idea follows this record.) |
| Document type | Chapter |
| Language | English |
| Published at | https://openreview.net/forum?id=J8McuwS3zY |
| Other links | https://openreview.net/group?id=NeurIPS.cc/2023/Conference#tab-accept-oral |
| Downloads | Make Pre-trained Model Reversible (Final published version) |
| Supplementary materials | |
| Permalink to this page | |
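
The mechanism sketched in the abstract, inserting zero-initialized adapters so that the pre-trained starting point is preserved and intermediate activations can be recomputed rather than cached, can be illustrated with a toy additive-coupling block. The following is a minimal hypothetical sketch in PyTorch, not the paper's actual MEFT architecture: the class name `ReversibleAdapterBlock`, the bottleneck size, and the zero-initialization scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReversibleAdapterBlock(nn.Module):
    """Toy RevNet-style additive-coupling block (illustrative, not MEFT itself).

    A frozen pre-trained sublayer F and a small trainable adapter G act as
    the two coupling functions. G is zero-initialized, so at the start of
    fine-tuning the block leaves the pre-trained computation unchanged.
    """

    def __init__(self, plm_sublayer: nn.Module, hidden_dim: int, bottleneck: int = 16):
        super().__init__()
        self.F = plm_sublayer                       # frozen pre-trained sublayer
        for p in self.F.parameters():
            p.requires_grad = False
        # Bottleneck adapter G, zero-initialized so that G(x) = 0 initially.
        self.G = nn.Sequential(
            nn.Linear(hidden_dim, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, hidden_dim),
        )
        nn.init.zeros_(self.G[-1].weight)
        nn.init.zeros_(self.G[-1].bias)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Additive coupling: invertible by construction.
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    @torch.no_grad()
    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Recover the inputs from the outputs; a memory-efficient training
        # loop can therefore discard activations and recompute them in the
        # backward pass instead of caching them.
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return x1, x2

# Toy usage: a frozen linear layer stands in for a pre-trained sublayer.
torch.manual_seed(0)
block = ReversibleAdapterBlock(nn.Linear(8, 8), hidden_dim=8)
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
print(torch.allclose(x1, r1, atol=1e-6), torch.allclose(x2, r2, atol=1e-6))
```

Because G starts at zero, the block initially computes y1 = x1 + F(x2) and y2 = x2, so the pre-trained function is preserved at initialization; and because the coupling is additive, the inputs are exactly recoverable from the outputs, which is what allows activation memory to be traded for recomputation.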