Over the last two decades, there has been tremendous progress in leveraging machine learning to solve fundamental problems in artificial intelligence. Nevertheless, a number of challenges remain for deploying these techniques in real-world applications. Our research focuses on addressing three key challenges:
We discuss our work on each of these three thrusts in more detail below. Our approaches draw on techniques spanning learning theory, programming languages, formal methods, and control theory; furthermore, we are interested in applications to robotics, healthcare, and software systems, among others.
Currently, we are particularly interested in understanding robust generalization (i.e., generalization to out-of-distribution examples), a critical feature of human learning that is lacking in deep learning. From a theoretical perspective, we have drawn a connection between robust generalization and model identification [ICML UDL Workshop 2021]; intuitively, to robustly generalize, a learning algorithm must be able to identify the “true” model. From a practical perspective, we have shown that program synthesis can generalize robustly [ICLR 2020, EMNLP (Findings) 2021], and have developed algorithms for quantifying uncertainty in the face of distribution shift [AISTATS 2020, ICML UDL Workshop 2021].
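To make the uncertainty-quantification thrust concrete, the following is a minimal sketch of split conformal prediction, a standard baseline for producing prediction intervals with coverage guarantees; it is intended only as an illustration of the problem setting, not as the specific algorithm in the cited papers (which address the harder case where the calibration and test distributions differ). The function name and arguments are hypothetical.

```python
import numpy as np

def conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction interval for a regression model.

    Given predictions and labels on a held-out calibration set, returns an
    interval around test_pred that covers the true label with probability
    at least 1 - alpha, assuming calibration and test data are exchangeable.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.sort(np.abs(np.asarray(cal_preds) - np.asarray(cal_labels)))
    n = len(scores)
    # Finite-sample-corrected quantile index: ceil((n+1)(1-alpha))-th score.
    k = int(np.ceil((n + 1) * (1 - alpha))) - 1
    q = scores[min(k, n - 1)]
    return test_pred - q, test_pred + q

# Example usage with a well-calibrated toy model:
cal_preds = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
cal_labels = cal_preds + np.array([0.1, -0.1, 0.2, -0.2, 0.0])
lo, hi = conformal_interval(cal_preds, cal_labels, test_pred=3.0, alpha=0.2)
```

The coverage guarantee above rests on exchangeability between calibration and test data, which is precisely what distribution shift violates; this is why quantifying uncertainty under shift requires techniques beyond the vanilla method sketched here.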
How can we ensure machine learning systems run efficiently?