Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
Large language models like GPT-3 can be prompted with in-context examples or instructions to complete tasks without fine-tuning the model’s parameters. Prompting handles open-ended queries without introducing large numbers of learnable parameters. However, manually crafting a successful prompt (a hard prompt) that maximizes the likelihood of the desired output is challenging, and specific downstream tasks may still require domain adaptation. This motivates soft prompts: tunable vectors appended to the input that steer the model toward the desired outputs. Soft prompts help handle low-data domains and improve generalization without exhaustive prompt engineering.
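The mechanics are easy to see in code. Below is a minimal sketch of prompt tuning, assuming a HuggingFace-style causal LM (GPT-2 stands in for a larger model, and the prompt length and learning rate are illustrative choices, not values from any particular paper): only the prepended soft-prompt vectors receive gradients, while the language model stays frozen.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Sketch: prepend a small block of tunable "soft prompt" embeddings to the
# input and train only those vectors, keeping the pretrained LM frozen.
model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for a larger LM
for p in model.parameters():
    p.requires_grad = False

prompt_length = 20  # illustrative choice
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

def forward_with_soft_prompt(input_ids):
    token_embeds = model.get_input_embeddings()(input_ids)                  # (B, T, D)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)     # (B, P, D)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)                # (B, P+T, D)
    return model(inputs_embeds=inputs_embeds)

# Only the soft prompt is handed to the optimizer.
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
```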
Published:
The introduction of GPT-3 revolutionized natural language processing by enabling few-shot learning through prompt engineering rather than fine-tuning. However, language models still struggle with zero-shot performance on tasks dissimilar from their pretraining data.
Published:
Let’s start this blog with a task. We have to train a model that concatenates the last letters of two input words. For example, if the input words are ‘Elon’ and ‘Musk’, the model should return ‘nk’. If we use supervised learning to train such a model, we will need many examples covering words with different ending letters before it consistently gives the correct output. One might argue that we can use few-shot learning with LLMs like GPT-3 to solve this problem. However, the model still isn’t able to produce the right output.
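For concreteness, here is roughly what a few-shot prompt for this task looks like; the example words and phrasing are my own illustration rather than the post's exact prompt:

```python
# Illustrative few-shot prompt for the last-letter concatenation task.
prompt = """Take the last letter of each word and concatenate them.

Input: "Bill Gates"
Output: "ls"

Input: "Larry Page"
Output: "ye"

Input: "Elon Musk"
Output:"""
# A plain few-shot completion from the model often still gets this wrong,
# which is exactly the failure this post sets up.
print(prompt)
```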
Published:
Since the public release of LLM-based chat assistants like ChatGPT, there has been a large emphasis on aligning AI language models to prevent the production of undesirable or harmful content. One approach is to use reinforcement learning from human preferences to optimize a pre-trained language model by learning a reward function based on human preferences [1]. Constitutional AI [2] further removes the need for “human” preferences by training a reward model from AI feedback refined using safety instructions. The recently released Llama-2 model [3] also uses safety and helpfulness criteria to learn an RLHF-like model that improves alignment in open-source LLMs.
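At the core of these approaches is a reward model trained on preference comparisons. As a hedged sketch, the pairwise Bradley-Terry style loss below is the standard formulation of this step, assumed here rather than quoted from any one of the cited papers:

```python
import torch
import torch.nn.functional as F

# Pairwise preference loss for a reward model r(x, y): given a preferred
# response y_w and a rejected response y_l for the same prompt, train the
# model so that r(x, y_w) > r(x, y_l).
def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_w - r_l), averaged over the batch of comparisons
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Dummy scalar rewards for a batch of three comparisons.
r_chosen = torch.tensor([1.2, 0.3, 2.1])
r_rejected = torch.tensor([0.4, 0.5, 1.0])
print(preference_loss(r_chosen, r_rejected))
```

The learned reward then serves as the optimization signal when the language model is fine-tuned with reinforcement learning.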
Developed tunable visual prompting with data augmentation techniques such as DeepAugment, CutMix, and CutOut for robust predictions from vision-language models like CLIP. Performance evaluations showed a 2-3% improvement across different corruptions of the CIFAR-C dataset.
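As a rough sketch of the visual-prompting setup described above, assuming OpenAI's `clip` package and a simple full-image additive prompt (the prompt parameterization and hyperparameters are illustrative, not the project's exact configuration):

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

# Learnable additive visual prompt applied to every input image,
# tuned while the CLIP backbone stays frozen.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
for p in model.parameters():
    p.requires_grad = False

visual_prompt = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)

def prompted_image_features(images):
    # images: (B, 3, 224, 224) tensors produced by `preprocess`
    return model.encode_image(images + visual_prompt)

# Augmented (e.g. CutMix / CutOut) batches are fed through this same path
# during prompt tuning; only `visual_prompt` is updated.
optimizer = torch.optim.SGD([visual_prompt], lr=0.1, momentum=0.9)
```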
Identified salient features in input images and generated adversarial masks using techniques such as saliency gradient maps, Grad-CAM, and random patch masking. Created non-private representations of the input images using latent diffusion models so that private information is not transmitted to downstream tasks such as FaceNet’s recognition model.
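A small sketch of the saliency-gradient step mentioned above, using a torchvision ResNet-18 as a stand-in classifier (the model choice and threshold are assumptions for illustration):

```python
import torch
import torchvision.models as models

# Vanilla gradient saliency: the per-pixel gradient magnitude of the top
# class score highlights regions that an adversarial mask could then cover.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def saliency_mask(image: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # image: (1, 3, H, W), normalized as the backbone expects
    image = image.detach().clone().requires_grad_(True)
    score = model(image).max(dim=1).values.sum()
    score.backward()
    saliency = image.grad.abs().max(dim=1).values          # (1, H, W)
    saliency = saliency / (saliency.max() + 1e-8)
    return (saliency > threshold).float()                  # binary mask of salient pixels
```

Grad-CAM or random patch masking slot in as alternative ways of producing the same kind of mask.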
Conducted ablation studies exploring efficient heterogeneous (CPU + GPU) distributed training for language models such as BERT and RoBERTa across factors such as batch size and the number of CPU/GPU parallel workers. Ray was used to parallelize CPU processes, and DeepSpeed’s ZeRO optimization was used for data parallelism along with mixed-precision training on a sentiment analysis task.
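For reference, the DeepSpeed side of such a setup is driven by a configuration like the sketch below; the specific values are placeholders for the ablation axes (batch size, ZeRO stage, CPU offload), not the ones used in this project, and the script would normally be run through the `deepspeed` launcher:

```python
import deepspeed
from transformers import AutoModelForSequenceClassification

# Illustrative ZeRO + mixed-precision config; each marked entry is one of the
# ablation axes described above.
ds_config = {
    "train_micro_batch_size_per_gpu": 16,            # batch size axis
    "fp16": {"enabled": True},                       # mixed precision training
    "zero_optimization": {
        "stage": 2,                                  # ZeRO stage axis
        "offload_optimizer": {"device": "cpu"},      # push optimizer state to CPU workers
    },
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
}

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2                # sentiment analysis head
)
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```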
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.