
UT Dallas CS Professors Make a Strong Impact at AAAI’21

Members of UT Dallas’s Machine Learning Center, directed by Professor Sriraam Natarajan, and the Cyber Security Research and Education Institute, directed by Professor Bhavani Thuraisingham, published nine papers at AAAI 2021, the annual conference of the Association for the Advancement of Artificial Intelligence and a top-tier AI venue held virtually in February 2021. The AI and cyber security researchers have collaborated in recent years; their papers range from robust learning to cyber defense. Below we discuss some of the major breakthroughs the teams have made with their students and with external partners from academia and federal research labs.

Assistant Professor Rishabh Iyer’s paper, titled GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning, is coauthored with his student Krishnateja Killamsetty and collaborators Durga Sivasubramanian and Ganesh Ramakrishnan, both from the Indian Institute of Technology Bombay. It addresses the problem of data subset selection (i.e., training on small subsets of large datasets) with the goal of robust and efficient learning. In particular, the authors formulate GLISTER as a mixed discrete-continuous, bi-level optimization problem that selects a subset of the training data to maximize the log-likelihood on a held-out validation set. The authors show some interesting connections to submodular optimization for several loss functions and also study the convergence properties of the resulting algorithms. Empirically, they show that GLISTER achieves 5x–7x speedups (and comparable cost and energy savings) across several deep learning datasets with negligible loss in accuracy.
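To make the bi-level idea concrete, here is a minimal toy sketch of validation-driven subset selection: the inner problem fits a one-parameter logistic model on a candidate subset, and the outer problem greedily grows the subset that maximizes held-out log-likelihood. This is an illustrative simplification under assumed toy data, not the authors' actual GLISTER algorithm or its efficient gradient-based approximations.

```python
import math

# Toy bi-level subset selection: outer loop picks training points,
# inner loop fits a 1-D logistic model p(y=1|x) = sigmoid(w*x).
# Greedy sketch only; GLISTER itself uses far more efficient machinery.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_w(data, steps=200, lr=0.5):
    """Inner problem: fit scalar weight w by gradient ascent on log-likelihood."""
    w = 0.0
    for _ in range(steps):
        g = sum((y - sigmoid(w * x)) * x for x, y in data) / len(data)
        w += lr * g
    return w

def val_loglik(w, val):
    """Held-out log-likelihood; the outer objective."""
    return sum(math.log(max(sigmoid(w * x) if y == 1 else 1 - sigmoid(w * x), 1e-12))
               for x, y in val)

def greedy_subset(train, val, budget):
    """Outer problem: greedily grow the subset maximizing validation log-likelihood."""
    subset, remaining = [], list(train)
    for _ in range(budget):
        best = max(remaining, key=lambda p: val_loglik(fit_w(subset + [p]), val))
        subset.append(best)
        remaining.remove(best)
    return subset

# Clean points follow y = 1 iff x > 0; two mislabeled points are injected.
train = [(x, 1 if x > 0 else 0) for x in [-3, -2, -1, -0.5, 0.5, 1, 2, 3]]
train += [(-2.5, 1), (2.5, 0)]  # label noise the selection should avoid
val = [(-1.5, 0), (1.5, 1), (-0.8, 0), (0.8, 1)]

subset = greedy_subset(train, val, budget=4)
clean = all((y == 1) == (x > 0) for x, y in subset)
print(subset, clean)
```

Because the outer objective is measured on clean validation data, the greedy loop steers around the mislabeled points, which mirrors the robustness motivation in the paper.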

Professor Natarajan’s paper, titled Relational Boosted Bandits, is coauthored with collaborators Ashutosh Kakadiya and Balaraman Ravindran, both from the Indian Institute of Technology Madras. It proposes Relational Boosted Bandits (RB2) to address tasks such as link prediction, relational classification, and recommendation. Contextual bandit algorithms have recently become quite popular in domains such as recommendation systems. However, they use a restricted representation that requires significant feature engineering for inherently relational domains such as social networks. Statistical relational models, on the other hand, while expressive, can be expensive to train and run inference with, making them difficult to employ in online settings. RB2 is the first contextual bandit framework for relational domains with the exploration-exploitation abilities of bandits. RB2 enables learning interpretable and explainable models owing to the more descriptive nature of the relational representation. The authors’ extensive experiments demonstrate both the efficiency and the efficacy of the approach for online learning in relational domains.

Associate Professor Feng Chen’s paper, titled Multidimensional Uncertainty-Aware Evidential Neural Networks, is coauthored with his students Yibo Hu, Yuzhe Ou, and Xujiang Zhao, and with collaborator Jin-Hee Cho of Virginia Tech. It proposes a new approach to predicting the inherent uncertainties of a deep neural network (DNN) that derive from different root causes in the training data, such as vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e., uncertainty due to conflicting evidence). Notably, the authors consider inherent uncertainties derived from different root causes by taking a hybrid approach that leverages both deep learning and belief models (i.e., Subjective Logic, or SL). Although both fields have studied uncertainty-aware approaches to tackle various kinds of decision-making problems, there has been little effort to combine their merits. The authors believe this work sheds light on the direction of incorporating both fields.
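To illustrate the two uncertainty types, the following sketch computes vacuity and dissonance from per-class "evidence" using standard Subjective Logic definitions (belief mass b_k = e_k/S, Dirichlet strength S, vacuity K/S). How a DNN head produces the evidence values is assumed here, not shown, and this is not the paper's network architecture.

```python
# Subjective Logic (SL) uncertainty sketch: given non-negative per-class
# evidence e_k (e.g., from a non-negative DNN activation), compute
# vacuity (lack of evidence) and dissonance (conflicting evidence).

def sl_uncertainties(evidence):
    K = len(evidence)
    S = sum(e + 1 for e in evidence)       # Dirichlet strength, alpha_k = e_k + 1
    belief = [e / S for e in evidence]     # belief masses b_k
    vacuity = K / S                        # high when total evidence is scarce
    diss = 0.0                             # high when strong evidence conflicts
    for k, bk in enumerate(belief):
        others = [b for j, b in enumerate(belief) if j != k]
        denom = sum(others)
        if denom > 0:
            # balance Bal(b_j, b_k) = 1 - |b_j - b_k| / (b_j + b_k)
            bal = sum(b * (1 - abs(b - bk) / (b + bk)) for b in others if b + bk > 0)
            diss += bk * bal / denom
    return vacuity, diss

# Little evidence overall -> high vacuity, low dissonance.
v1, d1 = sl_uncertainties([0.1, 0.1, 0.1])
# Strong but conflicting evidence -> low vacuity, high dissonance.
v2, d2 = sl_uncertainties([50, 50, 0])
print(v1, d1)
print(v2, d2)
```

The two example calls show why a single softmax confidence score cannot distinguish these cases: both inputs are maximally ambiguous about the class, but for very different root causes.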

Professor Murat Kantarcioglu and his team published two papers at the conference. The first, with his student Mustafa Safa Ozdayi and colleague Professor Yulia Gel of the Department of Statistics, is titled Defending against Backdoors in Federated Learning with Robust Learning Rate and describes a defense against backdoor attacks. Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data, which makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks because the data is decentralized and unvetted. One important line of attack against FL is the backdoor attack, in which an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, the authors propose a lightweight defense that requires minimal change to the FL protocol. At a high level, the defense carefully adjusts the aggregation server’s learning rate, per dimension and per round, based on the sign information of the agents’ updates. The authors first conjecture the steps necessary to carry out a successful backdoor attack in the FL setting and then explicitly formulate the defense based on that conjecture. Their experiments provide empirical evidence supporting the conjecture, and they test the defense against backdoor attacks under different settings. They observe that the backdoor is either completely eliminated or its accuracy is significantly reduced. Overall, the experiments suggest that the defense significantly outperforms several recently proposed defenses in the literature while having minimal influence on the accuracy of the trained models. In addition, the authors provide a convergence rate analysis for the proposed scheme.
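The sign-based adjustment can be sketched as follows: per dimension, the server sums the signs of the agents' updates and flips the learning rate wherever directional agreement falls below a threshold. The threshold name, the toy updates, and the simple averaging are illustrative assumptions, not the paper's exact formulation or experimental setup.

```python
# Sketch of a robust-learning-rate aggregation rule: dimensions where too few
# agents agree on the update direction get a negated server learning rate.

def robust_lr_aggregate(updates, server_lr=1.0, theta=3):
    """updates: list of per-agent update vectors (lists of floats).
    theta: minimum |sum of signs| needed to keep the positive learning rate."""
    dim, n = len(updates[0]), len(updates)
    aggregated = []
    for d in range(dim):
        sign_sum = sum((u[d] > 0) - (u[d] < 0) for u in updates)
        # keep the LR where enough agents agree on the direction; flip it otherwise
        lr = server_lr if abs(sign_sum) >= theta else -server_lr
        aggregated.append(lr * sum(u[d] for u in updates) / n)
    return aggregated

# Four honest agents agree on dimension 0; one attacker pushes dimension 1 hard.
honest = [[0.1, 0.01], [0.2, -0.01], [0.15, 0.02], [0.1, -0.02]]
attacker = [[0.1, 5.0]]  # backdoor-style update concentrated on dimension 1
agg = robust_lr_aggregate(honest + attacker, theta=4)
print(agg)
```

In the toy run, the honest dimension is applied normally, while the contested dimension's aggregated update is reversed, working against the attacker's intended direction.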

Professor Kantarcioglu’s second paper is with his former Ph.D. student Yasmeen Alufaisan, research scientist Yan Zhou, and researchers Laura R. Marusich and Jonathan Z. Bakdash of the US Army Research Laboratory. The paper, titled Does Explainable Artificial Intelligence Improve Human Decision-Making?, discusses experiments on whether explainable AI improves human decision-making. Explainable AI provides users with insights into model predictions, offering the potential for users to better understand and trust a model and to recognize and correct incorrect AI predictions. Prior research on human interaction with explainable AI has typically focused on measures such as interpretability, trust, and usability of the explanation, with mixed findings as to whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, the authors compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus an explanation. They find that providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explanations have a meaningful impact. Moreover, they observe that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct versus incorrect, though this ability was not significantly affected by including an explanation. Their results indicate that, at least in some situations, the information provided by explainable AI may not enhance user decision-making, and that further research may be needed to understand how to integrate explainable AI into real systems.

Professor Latifur Khan’s paper, titled Single View Point Cloud Generation via Unified 3D Prototype, is coauthored with his students and collaborators Yu Lin, Yigong Wang, Yifan Li, Yang Gao, and Zhuoyi Wang at UT Dallas. The paper proposes a novel deep learning solution to a challenging problem: predicting a complete 3D shape from a single image. Previous state-of-the-art approaches either focus on regularly sampled data (i.e., voxel grids and multi-view images) or ignore the rich 3D shape information concealed in point cloud datasets. In contrast, their framework solves the problem by simultaneously considering prototypical 3D shape features and corresponding 2D image features. Specifically, the framework deforms multiple 2-manifolds, endowed with prototypical features and image features, onto the surface of the target point cloud. This work explores a new direction in single-view 3D reconstruction.

Professor Gopal Gupta’s paper, titled Knowledge-driven Natural Language Understanding of English Text and its Applications, is coauthored with his students Kinjal Basu, Sarat Varanasi, and Farhad Shakerin at UT Dallas, and with collaborator Joaquin Arias. The paper introduces a novel knowledge-driven semantic representation approach for English text. By leveraging the VerbNet lexicon, the authors map the syntax tree of a text to its commonsense meaning, represented using basic knowledge primitives. The general-purpose knowledge produced by their approach can be used to build any reasoning-based NLU system that can also provide justifications. They applied the approach to construct two NLU applications presented in the paper: SQuARE (Semantic-based Question Answering and Reasoning Engine) and StaCACK (Stateful Conversational Agent using Commonsense Knowledge). Both systems work by “truly understanding” the natural language text they process, and both provide natural language explanations for their responses while maintaining high accuracy.

Professor Haim Schweitzer and his Ph.D. student Guihong Wan presented the paper titled “Accelerated Combinatorial Search for Outlier Detection With Provable Bounds on Sub-Optimality.” Outliers are irregular data points that negatively affect the accuracy of data analysis, and detecting and removing them is essential to the accuracy of many data analysis applications; tools for doing so can be found in most commercial statistics packages. The paper shows that classical search techniques from Artificial Intelligence can be retooled to detect and remove outliers in order to improve the Principal Component Analysis of data. Their techniques produce results that improve upon traditional approaches to this problem. One variant of their algorithm is guaranteed to compute the optimal result according to some natural criteria; it is currently the fastest optimal algorithm, but its running time is still too slow for large datasets. Another variant is much faster and comes with provable bounds on sub-optimality. Experiments show that when applied to real datasets, the bounds computed by their method typically guarantee an approximation within 10–15% of the optimum. No previous algorithm for this problem provides similar guarantees.

Last but not least, Professor Bhavani Thuraisingham published a paper titled Progressive One-shot Human Parsing with collaborators Haoyu He, Jing Zhang, and Dacheng Tao from the University of Sydney. Prior human parsing models are limited to parsing humans into classes pre-defined in the training data, which makes it hard to generalize to unseen classes, e.g., new clothing in fashion analysis. The paper proposes a new problem named one-shot human parsing (OSHP), which requires parsing humans into an open set of reference classes defined by any single reference example. During training, only the base classes defined in the training set are exposed, and these can overlap with part of the reference classes. The authors devise a novel Progressive One-shot Parsing network (POPNet) to address two critical challenges: testing bias and small sizes. POPNet consists of two collaborative metric-learning modules, the Attention Guidance Module and the Nearest Centroid Module, which learn representative prototypes for base classes and quickly transfer that ability to unseen classes during testing, thereby reducing testing bias. Moreover, POPNet adopts a progressive human parsing framework that incorporates knowledge learned for parent classes at the coarse granularity to help recognize their descendant classes at the fine granularity, thereby handling the small-sizes issue. Experiments on the ATR-OS benchmark tailored for OSHP demonstrate that POPNet outperforms other representative one-shot segmentation models by large margins and establishes a strong baseline.

As can be seen, UT Dallas professors have worked with their students as well as with national and international collaborators in academia and federal research labs to address cutting-edge research topics, ranging from foundations and theories in neural networks, deep learning, and explainable AI to applications in cyber security. Bhavani Thuraisingham states that “such breakthroughs will not only advance the state-of-the-art in Artificial Intelligence but also provide effective solutions to the challenging problems we are facing today from malware attack prevention to managing the COVID-19 pandemic.”


ABOUT THE UT DALLAS COMPUTER SCIENCE DEPARTMENT

The UT Dallas Computer Science program is one of the largest Computer Science departments in the United States, with over 3,315 bachelor’s-degree students, more than 1,110 master’s students, 165 Ph.D. students, 52 tenure-track faculty members, and 44 full-time senior lecturers, as of Fall 2019. With the University of Texas at Dallas’ unique history of starting as a graduate institution first, the CS Department is built on a legacy of valuing innovative research and providing advanced training for software engineers and computer scientists.