Yash Goyal

Email: yashgoyal.yg1-at-gmail.com

I am a Research Scientist at the Samsung SAIT AI Lab Montreal, within Mila.

Previously, I was a PhD student in the School of Interactive Computing at Georgia Tech. I was advised by Dhruv Batra. I also collaborated closely with Devi Parikh.

I used to co-organize the annual VQA Challenge.

As a research intern, I spent time at Google Brain (spring and summer 2019), Facebook AI Research (spring and summer 2017), the Army Research Laboratory (ARL) in Adelphi (summer 2015), and Duke University (summer 2013).

Email  /  CV  /  Google Scholar  /  arXiv

Research

Image Generative Models, Bias, Explainable AI, Vision & Language.

Publications

MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting
Oscar Mañas, Pau Rodriguez*, Saba Ahmadi*, Aida Nematzadeh, Yash Goyal, Aishwarya Agrawal
*equal contribution

Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023

Image Retrieval from Contextual Descriptions
Benno Krojer, Vaibhav Adlakha, Vibhav Vineet, Yash Goyal, Edoardo Ponti, Siva Reddy

Annual Meeting of the Association for Computational Linguistics (ACL), 2022

Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project
William Ferguson, Dhruv Batra, Raymond Mooney, Devi Parikh, Antonio Torralba, David Bau, David Diller, Josh Fasching, Jaden Fiotto-Kaufman, Yash Goyal, Jeff Miller, Kerry Moffitt, Alex Montes de Oca, Ramprasaath R Selvaraju, Ayush Shrivastava, Jialin Wu, Stefan Lee

Applied AI Letters, 2021

Explaining Classifiers with Causal Concept Effect (CaCE)
Yash Goyal, Amir Feder, Uri Shalit, Been Kim

arXiv, 2019

Counterfactual Visual Explanations
Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee

International Conference on Machine Learning (ICML), 2019

Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Yash Goyal, Tejas Khot, Aishwarya Agrawal, Douglas Summers-Stay, Dhruv Batra, Devi Parikh

International Journal of Computer Vision (IJCV), 2018
Project Website, Demo

Resolving Language and Vision Ambiguities Together: Joint Segmentation & Prepositional Attachment Resolution in Captioned Scenes
Gordon Christie*, Ankit Laddha*, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, Dhruv Batra
*equal contribution

Computer Vision and Image Understanding (CVIU), 2017

Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Yash Goyal*, Tejas Khot*, Douglas Summers-Stay, Dhruv Batra, Devi Parikh
*equal contribution

Computer Vision and Pattern Recognition (CVPR), 2017
Project Website, Demo

We counter the language priors present in the popular Visual Question Answering (VQA) dataset (Antol et al., ICCV 2015) and make vision (the V in VQA) matter! Specifically, we balance the VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset has been publicly released as part of the second iteration of the Visual Question Answering Challenge (VQA v2.0).
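
To make the pair-based idea concrete, here is a minimal illustrative sketch (not the released evaluation code; the data layout and model interface are hypothetical) of scoring a model on complementary pairs, where a pair counts only if both images are answered correctly:

```python
# Illustrative sketch of pair-consistency evaluation on a balanced VQA
# dataset. The tuple layout and model interface are hypothetical; the
# released VQA v2.0 annotations use their own JSON schema.

def pair_accuracy(model, balanced_pairs):
    """Score a VQA model on complementary image pairs.

    balanced_pairs: list of (question, image_a, answer_a, image_b, answer_b),
    where the same question yields different answers on the two images.
    """
    correct_pairs = 0
    for question, img_a, ans_a, img_b, ans_b in balanced_pairs:
        pred_a = model.predict(img_a, question)
        pred_b = model.predict(img_b, question)
        # A pair counts only if BOTH answers are right, so a model that
        # ignores the image and leans on language priors cannot score.
        if pred_a == ans_a and pred_b == ans_b:
            correct_pairs += 1
    return correct_pairs / len(balanced_pairs)
```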

Resolving Language and Vision Ambiguities Together: Joint Segmentation & Prepositional Attachment Resolution in Captioned Scenes
Gordon Christie*, Ankit Laddha*, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, Dhruv Batra
*equal contribution

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016

We present an approach to simultaneously perform semantic segmentation and prepositional phrase attachment resolution for captioned images. We show that our vision and language modules have complementary strengths, and that joint reasoning produces more accurate results than any module operating in isolation.
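
As a rough illustration of the joint-reasoning idea (the paper's actual model is more involved), one can imagine each module proposing scored hypotheses and a joint decoder picking the most compatible pair; the weights and interfaces below are assumptions, not the paper's implementation:

```python
# Hedged sketch of joint reasoning over module hypotheses: each module
# proposes scored candidates, and we pick the pair that maximizes a
# combined score instead of trusting either module in isolation.
def joint_decode(seg_hyps, ppa_hyps, compat, w_seg=1.0, w_ppa=1.0):
    """seg_hyps / ppa_hyps: lists of (hypothesis, module_score).
    compat(seg, ppa): cross-module consistency score, e.g. whether the
    attachment's referents overlap the segmented regions."""
    best, best_score = None, float("-inf")
    for seg, s_seg in seg_hyps:
        for ppa, s_ppa in ppa_hyps:
            score = w_seg * s_seg + w_ppa * s_ppa + compat(seg, ppa)
            if score > best_score:
                best, best_score = (seg, ppa), score
    return best
```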

Towards Transparent AI Systems: Interpreting Visual Question Answering Models
Yash Goyal, Akrit Mohapatra, Devi Parikh, Dhruv Batra

International Conference on Machine Learning (ICML) Workshop on Visualization for Deep Learning, 2016
[Best Student Paper]
Interactive Visualizations: Question and Image

In this paper, we experimented with two visualization methods -- guided backpropagation and occlusion -- to interpret deep learning models for the task of Visual Question Answering. Specifically, we find what part of the input (pixels in images or words in questions) the VQA model focuses on while answering a question about an image.
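
For intuition, below is a minimal sketch of the occlusion method, assuming a hypothetical model(image, question) interface that returns answer probabilities; this is illustrative, not the paper's code:

```python
import numpy as np

def occlusion_map(model, image, question, answer_idx, patch=16, stride=8):
    """Slide a gray patch over the image and record how much the
    probability of the chosen answer drops at each location.
    Assumes an 8-bit H x W x 3 image array."""
    H, W, _ = image.shape
    base = model(image, question)[answer_idx]
    heat = np.zeros((H, W))
    counts = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 128  # gray patch
            p = model(occluded, question)[answer_idx]
            # A large drop means the occluded region mattered.
            heat[y:y + patch, x:x + patch] += base - p
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)  # high values = important pixels
```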

Yin and Yang: Balancing and Answering Binary Visual Questions
Peng Zhang*, Yash Goyal*, Douglas Summers-Stay, Dhruv Batra, Devi Parikh
*equal contribution

Computer Vision and Pattern Recognition (CVPR), 2016
Data and Code

We balance the existing VQA dataset so that VQA models must understand the image to perform well. We also propose an approach that focuses heavily on vision and answers the question by visual verification.

CloudCV: Large-Scale Distributed Computer Vision as a Cloud Service
Harsh Agrawal, Clint Solomon Mathialagan, Yash Goyal, Neelima Chavali, Prakriti Banik, Akrit Mohapatra, Ahmed Osman, Dhruv Batra

Book Chapter, Mobile Cloud Visual Media Computing
Editors: Gang Hua, Xian-Sheng Hua. Springer, 2015.
Website

We present a comprehensive system that provides access to state-of-the-art distributed computer vision algorithms as a cloud service, through a web interface and APIs.

Design of a Physiologically Informed Virtual Reality-Based Interactive Platform for Individuals with Upper Limb Impairment
Deepesh Kumar, Yash Goyal, Sunil Nair, Arvind Chauhan, Uttama Lahiri

IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2014), UK

Talks

This website's template is based on Jon Barron's website.