
Drs. Gogate and Ruozzi’s Explainable AI Project Funded by DARPA for $1.8 Million

The Artificial Intelligence Group within the UT Dallas Computer Science Department consists of seven faculty members and is ranked 7th in the nation for its research in Natural Language Processing (NLP) and 13th for its research in Artificial Intelligence (AI) overall. One of the group’s major research thrusts in machine learning is the Explainable AI (XAI) Project, funded by the Defense Advanced Research Projects Agency (DARPA) for $1.8 million and headed by Drs. Vibhav Gogate and Nicholas Ruozzi.

The project aims to improve upon current state-of-the-art machine learning techniques (e.g., deep neural networks), which produce models with a large number of learned parameters and require a significant amount of hyper-parameter tuning. Once learned, these models can be quite opaque to end-users; it can be difficult to understand why the chosen parameters and hyper-parameters perform well for a given learning or prediction task. Drs. Gogate and Ruozzi’s project aims to build real-world machine learning models that are accurate, scalable, deep, interpretable, and explainable. In particular, the proposed models, once learned from data, can efficiently find ranked, possibly related explanations for their decisions and predictions in the presence of observations that are incomplete, noisy, and spread across multiple modalities such as video, audio, and text. This capability is crucial for building an AI system that can accurately solve the interactive real-world challenge problem considered in this project: recognizing activities from video, audio, and text descriptions.

The project is of great significance because many countries are introducing legislation requiring that, when a machine learning tool is used for decision-making, the decisions it reaches be explainable to the individuals they affect. Current machine learning tools act like black boxes and cannot explain the logic used to arrive at a result.

Drs. Gogate and Ruozzi’s project will develop a unified approach to explainable AI based on tractable probabilistic logic models (TPLMs), a powerful family of representations that the two researchers have developed over the past few years. This family includes popular representations such as decision trees, binary decision diagrams (BDDs), cutset networks, sum-product networks, arithmetic circuits, sentential decision diagrams, first-order arithmetic circuits, and tractable Markov logic, and is the basis of a large body of research in the machine learning community.

TPLMs have several desirable properties. First, a large number of typical query types (including estimation and prediction queries) are tractable in TPLMs, and as a result, answers to queries, as well as explanations derived from them, are guaranteed to be fast, accurate, and robust. Second, they are highly interpretable: at a high level, they bring the power of first-order logic, latent variables, and probability to decision trees and inherit the latter’s superior interpretability. Third, they are compositional; in particular, it is both easy and efficient to add background knowledge, as well as possibly contradictory evidence (after a wrong explanation is provided), to a TPLM. Moreover, as the project will show, popular machine learning classifiers such as support vector machines and deep neural networks can be integrated with TPLMs by having the former serve as base models at the leaves of a TPLM and then jointly learning the combined model. Finally, they are rich representations: they can compactly represent relational structure (via first-order or relational logic), uncertainty, and complex combinatorial constraints and objects such as parse trees and shapes, and reason about them in a mathematically rigorous manner.
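To make the tractability property concrete, below is a minimal, self-contained Python sketch of a toy sum-product network, one member of the TPLM family named above. The structure, the variables (X1, X2), and the parameters are purely illustrative assumptions and are not drawn from the project; the point is only that a marginal query such as P(X1 = 1) is answered by a single bottom-up pass that is linear in the size of the network.

    # Illustrative sketch only: a toy sum-product network (a TPLM family member).
    # All variables and parameters are made up for this example.

    class Leaf:
        """Indicator leaf: 1 if the variable matches the value (or is unobserved), else 0."""
        def __init__(self, var, value):
            self.var, self.value = var, value

        def eval(self, evidence):
            if self.var not in evidence:      # unobserved variables are summed out
                return 1.0
            return 1.0 if evidence[self.var] == self.value else 0.0

    class Sum:
        """Weighted mixture over children; weights sum to 1."""
        def __init__(self, children, weights):
            self.children, self.weights = children, weights

        def eval(self, evidence):
            return sum(w * c.eval(evidence)
                       for c, w in zip(self.children, self.weights))

    class Product:
        """Factorization over children defined on disjoint variables."""
        def __init__(self, children):
            self.children = children

        def eval(self, evidence):
            result = 1.0
            for c in self.children:
                result *= c.eval(evidence)
            return result

    def bernoulli(var, p1):
        """Helper: distribution over one binary variable with P(var = 1) = p1."""
        return Sum([Leaf(var, 0), Leaf(var, 1)], [1.0 - p1, p1])

    # P(X1, X2) = 0.6 * P_a(X1) P_a(X2) + 0.4 * P_b(X1) P_b(X2)
    spn = Sum(
        children=[
            Product([bernoulli("X1", 0.1), bernoulli("X2", 0.2)]),
            Product([bernoulli("X1", 0.8), bernoulli("X2", 0.7)]),
        ],
        weights=[0.6, 0.4],
    )

    # A marginal query such as P(X1 = 1) takes one bottom-up pass,
    # linear in the size of the network: 0.6 * 0.1 + 0.4 * 0.8 = 0.38.
    print(spn.eval({"X1": 1}))   # approximately 0.38

In a model of this form, every node's value is computed exactly once, which is what makes answers to such queries, and explanations built on top of them, fast and exact rather than approximate.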

The UT Dallas XAI Project is part of a larger DARPA-sponsored project involving UT Dallas (lead), the University of California, Los Angeles (UCLA), Texas A&M University, and the Indian Institute of Technology Delhi. The project will run until 2021.

“Drs. Gogate and Ruozzi’s XAI project will significantly advance the state of the art in machine learning,” said Dr. Gopal Gupta, Professor and Department Head of Computer Science. “The UT Dallas Machine Learning Group is one of the leading groups in the world, and it’s no surprise that they have been tasked by DARPA to solve a difficult problem in machine learning,” he added.


ABOUT THE UT DALLAS COMPUTER SCIENCE DEPARTMENT

The UT Dallas Computer Science Department is one of the largest Computer Science departments in the United States, with over 2,400 bachelor’s degree students, more than 1,000 master’s students, 150 Ph.D. students, 53 tenure-track faculty members, and 38 full-time senior lecturers as of Fall 2017. With The University of Texas at Dallas’ unique history of starting as a graduate institution, the CS Department is built on a legacy of valuing innovative research and providing advanced training for software engineers and computer scientists.