Visual Prompting
Large language models like GPT-3 can be prompted with in-context examples or instructions to complete tasks without fine-tuning the model’s parameters. Prompting makes it possible to handle open-ended queries without introducing large numbers of learnable parameters. However, manually crafting a prompt that maximizes the likelihood of the desired output is challenging; such manually written, discrete prompts are known as hard prompts. Specific downstream tasks may also require domain adaptation. This motivates soft prompts: tunable continuous vectors prepended to the input that steer the model toward desired outputs. Soft prompts help in low-data domains and improve generalization without exhaustive prompt engineering.
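The core mechanic of soft prompting can be sketched in a few lines: instead of searching over discrete tokens, a small matrix of trainable vectors is prepended to the (frozen) token embeddings before they enter the model. A minimal NumPy illustration, with all sizes and names chosen for the example rather than taken from any particular implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration.
vocab_size, d_model = 100, 16
prompt_len, seq_len = 5, 8

# Frozen model parameter: a token embedding table (stand-in for a full LM).
embedding = rng.normal(size=(vocab_size, d_model))

# Soft prompt: trainable vectors living directly in embedding space,
# not tied to any vocabulary token. Only these would receive gradients.
soft_prompt = rng.normal(size=(prompt_len, d_model)) * 0.02

def build_input(token_ids, soft_prompt, embedding):
    """Embed the tokens and prepend the tunable soft-prompt vectors."""
    token_embeds = embedding[token_ids]              # (seq_len, d_model)
    return np.concatenate([soft_prompt, token_embeds], axis=0)

token_ids = rng.integers(0, vocab_size, size=seq_len)
x = build_input(token_ids, soft_prompt, embedding)
print(x.shape)  # (prompt_len + seq_len, d_model) -> (13, 16)
```

During tuning, the backbone's weights (here, `embedding`) stay fixed; only `soft_prompt` is updated, which is why the approach adds so few parameters.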