conference/journal

Michael S. Yao, Yimeng Zeng, Hamsa Bastani, Jacob Gardner, James C. Gee, Osbert Bastani. Generative Adversarial Model-Based Optimization via Source Critic Regularization. NeurIPS 2024. [arXiv]

Xinmeng Huang, Shuo Li, Edgar Dobriban, Osbert Bastani, Hamed Hassani, Dongsheng Ding. One-Shot Safety Alignment for Large Language Models via Optimal Dualization. NeurIPS (Spotlight) 2024. [arXiv]

Xinmeng Huang*, Shuo Li*, Mengxin Yu, Matteo Sesia, Hamed Hassani, Insup Lee, Osbert Bastani**, Edgar Dobriban**. Uncertainty in Language Models: Assessment through Rank-Calibration. EMNLP 2024. [arXiv]

William Liang, Sam Wang, Hung-Ju Wang, Yecheng Jason Ma, Osbert Bastani, Dinesh Jayaraman. Environment Curriculum Generation via Large Language Models. CoRL (Oral) 2024.

Kan Xu, Hamsa Bastani, Surbhi Goel, Osbert Bastani. Stochastic Bandits with ReLU Neural Networks. ICML 2024. [arXiv]

Jason Ma, William Liang, Hung-Ju Wang, Yuke Zhu, Linxi Fan, Osbert Bastani, Dinesh Jayaraman. Language Model Guided Sim-To-Real Transfer. RSS 2024. [arXiv]

DROID Dataset Team. DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset. RSS 2024. [arXiv]

Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani. TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction. NAACL 2024. [arXiv]

Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Jim Fan**, Anima Anandkumar**. Eureka: Human-Level Reward Design via Coding Large Language Models. ICLR 2024. [arXiv]

Alexander Shypula, Aman Madaan, Yimeng Zeng, Uri Alon, Jacob Gardner, Yiming Yang, Milad Hashemi, Graham Neubig, Parthasarathy Ranganathan, Osbert Bastani, Amir Yazdanbakhsh. Learning Performance-Improving Code Edits. ICLR (Spotlight) 2024. [arXiv]

Wenwen Si, Sangdon Park, Insup Lee, Edgar Dobriban, Osbert Bastani. PAC Prediction Sets Under Label Shift. ICLR 2024. [arXiv]

Charles Zhang*, Yunshuang Li*, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Jason Ma**, Luca Weihs**. Universal Visual Decomposer: Long-Horizon Manipulation Made Easy. ICRA 2024. [arXiv]

Open X-Embodiment Collaboration. Open X-Embodiment: Robotic Learning Datasets and RT-X Models. ICRA 2024. [arXiv]

Kavi Gupta, Chenxi Yang, Kayla McCue, Osbert Bastani, Phillip Sharp, Christopher Burge, Armando Solar-Lezama. Improved modeling of RNA-binding protein motifs in an interpretable neural model of RNA splicing. Genome Biology 2024. [bioRxiv]

Haosen Ge, Hamsa Bastani, Osbert Bastani. Rethinking Fairness for Human-AI Collaboration. ITCS 2024. [arXiv]

Stephen Mell, Steve Zdancewic, Osbert Bastani. Optimal Program Synthesis via Abstract Interpretation. POPL 2024. [paper]

Yahan Yang, Sunghye Cho, Maxine Covello, Azia Knox, Osbert Bastani, James Weimer, Edgar Dobriban, Robert Schultz, Insup Lee, Julia Parish-Morris. Automatically Predicting Perceived Conversation Quality in a Pediatric Sample Enriched for Autism. INTERSPEECH 2023. [paper]

Yanju Chen, Chenglong Wang, Xinyu Wang, Osbert Bastani, Yu Feng. Fast and Reliable Program Synthesis via User Interaction. ASE 2023. [paper]

Adam Khakhar, Stephen Mell, Osbert Bastani. PAC Prediction Sets for Large Language Models of Code. ICML 2023. [arXiv]

Jason Ma, Vikash Kumar, Amy Zhang, Osbert Bastani, Dinesh Jayaraman. LIV: Language-Image Representations and Rewards for Robotic Control. ICML 2023. [arXiv]

Kishor Jothimurugan, Steve Hsu, Osbert Bastani, Rajeev Alur. Robust Subtask Learning for Compositional Generalization. ICML 2023. [arXiv]

Stephen Mell, Favyen Bastani, Steve Zdancewic, Osbert Bastani. Synthesizing Trajectory Queries from Examples. CAV 2023. [paper]

Rajeev Alur, Osbert Bastani, Kishor Jothimurugan, Mateo Perez, Fabio Somenzi, Ashutosh Trivedi. Policy Synthesis and Reinforcement Learning for Discounted LTL. CAV 2023. [arXiv]

Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman. TOM: Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching. L4DC 2023. [arXiv]

Wenwen Si, Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani. Angelic Patches for Improving Third-Party Object Detector Performance. CVPR 2023. [paper]

Sangdon Park, Osbert Bastani, Taesoo Kim. ACon2: Adaptive Conformal Consensus for Provable Blockchain Oracles. USENIX Security 2023. [arXiv]

Wanqiao Xu, Jason Ma, Kan Xu, Hamsa Bastani, Osbert Bastani. Uniformly Conservative Exploration in Reinforcement Learning. AISTATS 2023. [arXiv]

Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar**, Amy Zhang**. VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training. ICLR (Spotlight) 2023. [arXiv]

Jason Ma, Jason Yan, Dinesh Jayaraman, Osbert Bastani. Offline Goal-Conditioned Reinforcement Learning via f-Advantage Regression. NeurIPS 2022. [arXiv]

Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam, Aaron Roth. Practical Adversarial Multivalid Conformal Prediction. NeurIPS (Oral) 2022. [arXiv]

Osbert Bastani, Jason Ma, Estelle Shen, Wanqiao Xu. Regret Bounds for Risk-Sensitive Reinforcement Learning. NeurIPS 2022. [arXiv]

Halley Young, Maxwell Du, Osbert Bastani. Neurosymbolic Deep Generative Models for Sequence Data with Relational Constraints. NeurIPS 2022. [paper]

Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani. PAC Prediction Sets for Meta-Learning. NeurIPS 2022. [arXiv]

Souradeep Dutta, Kaustubh Sridhar, Osbert Bastani, Edgar Dobriban, James Weimer, Insup Lee, Julia Parish-Morris. Exploring with Sticky Mittens: Reinforcement Learning with Expert Interventions via Option Templates. CoRL 2022. [arXiv]

Soham Dan, Osbert Bastani, Dan Roth. Understanding Robust Generalization in Learning Regular Languages. ICML 2022. [arXiv]

Sooyong Jang, Sangdon Park, Insup Lee, Osbert Bastani. Sequential Covariate Shift Detection Using Classifier Two-Sample Tests. ICML 2022. [paper]

Jason Ma, Andrew Shen, Dinesh Jayaraman, Osbert Bastani. SMODICE: Versatile Offline Imitation Learning via State Occupancy Matching. ICML 2022. [arXiv]

Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, Rajeev Alur. Specification-Guided Learning of Nash Equilibria with High Social Welfare. CAV 2022. [arXiv]

George Tolkachev, Stephen Mell, Steve Zdancewic, Osbert Bastani. Counterfactual Explanations for Natural Language Interfaces. ACL (Short) 2022. [paper]

Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani. PAC Prediction Sets Under Covariate Shift. ICLR 2022. [arXiv]

Jason Ma*, Andrew Shen*, Osbert Bastani, Dinesh Jayaraman. Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning. AAAI 2022. [arXiv]

Jason Ma, Dinesh Jayaraman, Osbert Bastani. Conservative Offline Distributional Reinforcement Learning. NeurIPS 2021. [arXiv]

Yichen Yang, Jeevana Priya Inala, Osbert Bastani, Yewen Pu, Armando Solar-Lezama, Martin Rinard. Program Synthesis Guided Reinforcement Learning for Partially Observed Environments. NeurIPS (Spotlight) 2021. [arXiv]

Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, Rajeev Alur. Compositional Reinforcement Learning from Logical Specifications. NeurIPS 2021. [arXiv]

Alexis Ross, Himabindu Lakkaraju, Osbert Bastani. Learning Models for Actionable Recourse. NeurIPS 2021. [arXiv]

Soham Dan, Osbert Bastani, Dan Roth. Few-Shot Novel Concept Learning for Semantic Parsing. EMNLP (Findings) 2021. [paper]

Jason Ma, Jeevana Priya Inala, Dinesh Jayaraman, Osbert Bastani. Likelihood-Based Diverse Sampling for Trajectory Forecasting. ICCV 2021. [paper]

Favyen Bastani, Songtao He, Ziwen Jiang, Osbert Bastani, Sam Madden. SkyQuery: An Aerial Drone Video Sensing Platform. Onward! 2021. [paper]

Radoslav Ivanov, Kishor Jothimurugan, Steve Hsu, Shaan Vaidya, Rajeev Alur, Osbert Bastani. Compositional Learning and Verification of Neural Network Controllers. EMSOFT 2021. [paper]

Osbert Bastani, Shuo Li, Anton Xue. Safe Reinforcement Learning via Statistical Model Predictive Shielding. RSS 2021. [paper]

Kan Xu, Xuanyi Zhao, Hamsa Bastani, Osbert Bastani. Group-Sparse Matrix Factorization for Transfer Learning of Word Embeddings. ICML 2021. [arXiv]

Jocelyn Chen, Aaron Lamoreaux, Xinyu Wang, Greg Durrett, Osbert Bastani, Isil Dillig. Web Question Answering with Neurosymbolic Program Synthesis. PLDI 2021. [paper]

Osbert Bastani. Safe Reinforcement Learning with Nonlinear Dynamics via Model Predictive Shielding. ACC 2021. [arXiv] [code]

Kishor Jothimurugan, Osbert Bastani, Rajeev Alur. Abstract Value Iteration for Hierarchical Deep Reinforcement Learning. AISTATS 2021. [arXiv]

Min Wen, Osbert Bastani, Ufuk Topcu. Algorithms for Fairness in Sequential Decision Making. AISTATS 2021. [arXiv]

Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani. PAC Confidence Predictions for Deep Neural Network Classifiers. ICLR 2021. [arXiv]

Jeevana Priya Inala, Yichen Yang, James Paulos, Yewen Pu, Osbert Bastani, Vijay Kumar, Martin Rinard, Armando Solar-Lezama. Neurosymbolic Transformers for Multi-Agent Communication. NeurIPS 2020. [paper] [arXiv]

Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik. Generating Programmatic Referring Expressions via Program Synthesis. ICML 2020. [paper] [code]

Himabindu Lakkaraju, Nino Arsov, Osbert Bastani. Robust and Stable Black Box Explanations. ICML 2020. [paper] [arXiv]

Yanju Chen, Chenglong Wang, Osbert Bastani, Isil Dillig, Yu Feng. Program Synthesis using Deduction-Guided Reinforcement Learning. CAV 2020. [paper]

Shuo Li, Osbert Bastani. Robust Model Predictive Shielding for Safe Reinforcement Learning with Stochastic Dynamics. ICRA 2020. [paper] [arXiv]

Osbert Bastani. Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems. AISTATS 2020. [paper] [arXiv]

Sangdon Park, Osbert Bastani, Jim Weimer, Insup Lee. Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation. AISTATS 2020. [paper] [arXiv]

Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee. PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction. ICLR 2020. [paper] [arXiv]

Jeevana Priya Inala, Osbert Bastani, Zenna Tavares, Armando Solar-Lezama. Synthesizing Programmatic Policies that Inductively Generalize. ICLR 2020. [paper]

Himabindu Lakkaraju, Osbert Bastani. "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations. AIES 2020. [paper] [arXiv]

Kishor Jothimurugan, Rajeev Alur, Osbert Bastani. Composable Specifications for Reinforcement Learning. NeurIPS 2019. [paper] [presentation] [code]

Osbert Bastani, Xin Zhang, Armando Solar-Lezama. Verifying Fairness Properties via Concentration. OOPSLA 2019. [paper] [arXiv]

Jia Chen, Jiayi Wei, Yu Feng, Osbert Bastani, Isil Dillig. Relational Verification using Reinforcement Learning. OOPSLA 2019. [paper]

Zhengkai Wu, Evan Johnson, Wei Yang, Osbert Bastani, Dawn Song, Jian Peng, Tao Xie. REINAM: Reinforcement Learning for Input-Grammar Inference. FSE 2019. [paper]

Arbaaz Khan, Chi Zhang, Shuo Li, Jiayue Wu, Brent Schlotfeldt, Sarah Tang, Alejandro Ribeiro, Osbert Bastani, Vijay Kumar. Learning Safe Unlabeled Multi-Robot Planning with Motion Constraints. IROS 2019. [paper] [arXiv]

Halley Young, Osbert Bastani, Mayur Naik. Learning Neurosymbolic Generative Models via Program Synthesis. ICML 2019. [paper] [arXiv]

Osbert Bastani, Rahul Sharma, Lazaro Clapp, Saswat Anand, Alex Aiken. Eventually Sound Points-To Analysis with Missing Code. ECOOP 2019. [paper] [arXiv]

Osbert Bastani, Yewen Pu, Armando Solar-Lezama. Verifiable Reinforcement Learning via Policy Extraction. NeurIPS 2018. [paper] [arXiv] [presentation] [poster] [code]

Osbert Bastani, Rahul Sharma, Alex Aiken, Percy Liang. Active Learning of Points-To Specifications. PLDI 2018. [paper] [arXiv] [presentation] [code]

Yu Feng, Ruben Martins, Osbert Bastani, Isil Dillig. Program Synthesis using Conflict-Driven Learning. PLDI 2018 (Distinguished Paper). [paper] [arXiv]

Osbert Bastani, Rahul Sharma, Alex Aiken, Percy Liang. Synthesizing Program Input Grammars. PLDI 2017. [paper] [arXiv] [presentation] [code]

Yu Feng, Osbert Bastani, Ruben Martins, Isil Dillig, Saswat Anand. Automated Synthesis of Semantic Malware Signatures using Maximum Satisfiability. NDSS 2017. [paper] [arXiv]

Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, Aditya Nori, Antonio Criminisi. Measuring Neural Net Robustness with Constraints. NeurIPS 2016. [paper] [arXiv] [poster] [code]

Lazaro Clapp, Osbert Bastani, Saswat Anand, Alex Aiken. Minimizing GUI Event Traces. FSE 2016. [paper]

Osbert Bastani, Saswat Anand, Alex Aiken. Interactively verifying absence of explicit information flows in Android apps. OOPSLA 2015. [paper] [presentation]

Osbert Bastani, Saswat Anand, Alex Aiken. Specification inference using context-free reachability. POPL 2015. [paper] [presentation]

Osbert Bastani, Christopher Hillar, Dimitar Popov, Maurice Rojas. Randomization, sums of squares, near-circuits, and faster real root counting. Contemporary Mathematics 556 (2011): 145-166. [paper]


book chapters

Hamsa Bastani, Osbert Bastani, Tsai-Hsuan (Angel) Chung. Optimizing Health Supply Chains in LMICs with Machine Learning: A Case Study in Sierra Leone. Sustainable and Responsible Operations - The New Frontier, 2024. [chapter]

Rajeev Alur, Suguman Bansal, Osbert Bastani, Kishor Jothimurugan. A Framework for Transforming Specifications in Reinforcement Learning. Springer Festschrift in honor of Prof. Tom Henzinger, 2022. [arXiv]

Osbert Bastani, Jeevana Priya Inala, Armando Solar-Lezama. Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis. xxAI - Beyond Explainable Artificial Intelligence, 2022. [chapter]


workshop

Kan Xu, Hamsa Bastani, Osbert Bastani. Robust Generalization of Quadratic Neural Networks via Function Identification. ICML Workshop on UDL 2021. [arXiv]

Hamsa Bastani, Osbert Bastani, Park Sinchaisri. Improving Human Decision-Making with Machine Learning. ICML Workshop on HumanAI 2021. [arXiv]

Jeevana Priya Inala, Jason Ma, Osbert Bastani, Xin Zhang, Armando Solar-Lezama. Safe Human-Interactive Control via Shielding. RSS Workshop on Social Robot Navigation 2021. [arXiv]

Sadra Sadraddini, Shen Shen, Osbert Bastani. Polytopic Trees for Verification of Learning-Based Controllers. Workshop on NSV 2019. [paper]

Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, Aditya Nori, Antonio Criminisi. Measuring Neural Net Robustness with Constraints. DARS Workshop 2018 (Invited Paper). [paper] [presentation]

Osbert Bastani, Carolyn Kim, Hamsa Bastani. Interpretability via Model Extraction. FAT/ML Workshop 2017. [paper] [arXiv] [extended] [poster] [code]

Osbert Bastani, Saswat Anand, Alex Aiken. An interactive approach to mobile app verification. MobileDeLi Workshop 2015 (Invited Paper). [paper]


other

Osbert Bastani, Xin Zhang, Armando Solar-Lezama. Synthesizing Queries via Interactive Sketching. [arXiv]

Wenbo Zhang, Osbert Bastani, Vijay Kumar. MAMPS: Safe Multi-Agent Reinforcement Learning via Model Predictive Shielding. [arXiv]

Carolyn Kim, Osbert Bastani. Learning Interpretable Models with Causal Guarantees. [arXiv]

Brian Heath, Neelay Velingker, Osbert Bastani, Mayur Naik. PolyDroid: Learning-Driven Specialization of Mobile Applications. [arXiv]

Osbert Bastani. Beyond Deductive Inference in Program Analysis. Ph.D. Thesis, 2018. [thesis] [defense]