Prompt tuning methods for pre-trained language models. P-Tuning v1 ("GPT Understands, Too") introduced trainable continuous prompt embeddings in place of hand-crafted discrete prompts. P-Tuning v2 extended this with deep prompt tuning (trainable prompts at every transformer layer) and showed that prompt tuning can match full fine-tuning across model scales and tasks; it was published at ACL 2022.
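The core mechanism can be sketched briefly: a small matrix of trainable continuous prompt vectors is prepended to the frozen token embeddings before they enter the model, and only those prompt vectors are updated during training. This is an illustrative sketch assuming a toy embedding table, not the authors' implementation; all names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, d_model = 100, 16
num_prompt_tokens = 4

# Frozen pre-trained token embedding table (stands in for the LM's embeddings).
token_embeddings = rng.normal(size=(vocab_size, d_model))

# The only trainable parameters in prompt tuning: continuous prompt vectors.
prompt_embeddings = rng.normal(size=(num_prompt_tokens, d_model))

def embed_with_prompt(input_ids: np.ndarray) -> np.ndarray:
    """Prepend the trainable prompt vectors to the frozen token embeddings."""
    tok = token_embeddings[input_ids]  # (seq_len, d_model), kept frozen
    return np.concatenate([prompt_embeddings, tok], axis=0)

ids = np.array([5, 42, 7])
x = embed_with_prompt(ids)
print(x.shape)  # (7, 16): 4 prompt vectors + 3 token embeddings
```

In v1 such prompts are inserted only at the input layer; v2's deep variant adds trainable prompt vectors to the inputs of every transformer layer, which is what closes the gap with fine-tuning on harder tasks and smaller models.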

Paper

arXiv: 2110.07602

Venue: ACL 2022

training efficiency research