I'm Vaish Shrivastava, a Master's student in Computer Science at Stanford University. At Stanford, I am grateful to be advised by Prof. Percy Liang, working on prompting techniques that make reasoning in large language models more robust.

My passion is building language models that can reason through complex, multi-step problems, spanning commonsense reasoning, mathematical problem solving, and tasks that are difficult to decompose.

I am also interested in a range of related subfields of NLP, including question answering and retrieval-augmented models.

Before Stanford, I was an Applied Scientist at Microsoft Search, Assistant and Intelligence, where I worked on developing efficient, large-scale machine learning systems deployed to millions of users.

At Microsoft, I worked on low-cost compression techniques for pre-trained models, personalizing language models through parameter-efficient learning, and multi-turn dialog modeling. I also collaborated with Microsoft Research.
Before joining Microsoft full-time, I completed my Bachelor's in Computer Science at Caltech, where I enjoyed working on multi-task reinforcement learning.

vshrivas · vaish.shrivastava@stanford.edu · vaish-shrivastava · vaish.shrivastava

For more information, here is my CV.