
About Us

Our lab develops machine learning and artificial intelligence methodologies for learning causal effects from complex observational and experimental datasets. Our mission is to automate causal inference and make it accessible to decision-makers across domains. Our methods are motivated by and have been applied in fields such as sustainability, healthcare, operations management, and digital experimentation. The lab's principal investigator has led the development of open-source software widely used in industry, and the lab continues to support and develop open-source tools that lower barriers to entry in causal machine learning for data scientists.

Supported By

NSF

Google

Amazon

Bodossaki Foundation

News

Applications

Sustainability

Causal AI for Sustainability

Our lab works on understanding the causal drivers of a region's vulnerability to natural hazards, with the aim of informing appropriate policy interventions.
Healthcare

Causal AI for Healthcare

Our lab works on topics such as detecting implicit biases in medical decisions, understanding the causal effects of latent treatment dimensions from observational patient treatment trajectories, and evaluating policy changes in kidney exchange systems.
Operations Management

Causal AI for Operations

Our lab works on data-driven pricing, customer segmentation, return-on-investment analysis, and the impact of GenAI at work.
Digital Experiments

Causal AI for Digital Experiments

Our lab works on developing data-analytic tools for experimentation on digital platforms, such as the estimation of heterogeneous effects from large-scale A/B tests and recommendation A/B tests.

Foundations

Non-Parametric Instrumental Variables

Non-Parametric Instrumental Variables

Instrumental variables (IV) is a powerful technique for parsing causal effects from observational data with unobserved confounding, leveraging natural experiments. Classical IV methods either make strong parametric assumptions or are fully non-parametric and suffer from the curse of dimensionality (applicable only with a small number of variables). We have developed generic ML techniques for instrumental variable methods, allowing for flexible causal modeling whenever instruments are available.
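As a concrete illustration of the core identification logic (not of the lab's specific estimators), here is a minimal linear IV sketch on synthetic data; the data-generating process and all coefficients are invented for the example:

```python
# Minimal linear IV (Wald ratio / 2SLS) sketch on synthetic confounded data.
# The data-generating process is illustrative, not from any paper.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                    # instrument: moves t, not y directly
u = rng.normal(size=n)                    # unobserved confounder
t = 0.8 * z + u + rng.normal(size=n)      # treatment, confounded by u
y = 2.0 * t + u + rng.normal(size=n)      # outcome; true causal effect is 2.0

# Naive OLS slope is biased because u drives both t and y.
ols = np.cov(t, y)[0, 1] / np.var(t)

# IV (Wald ratio, equivalent to 2SLS with one instrument): the instrument's
# effect on y divided by its effect on t isolates the causal channel.
iv = np.cov(z, y)[0, 1] / np.cov(z, t)[0, 1]
# ols is pulled well above 2.0 by the confounding; iv recovers roughly 2.0
```

The lab's work replaces the linear stages with flexible ML models; this sketch only shows why an instrument breaks the confounding.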
Adaptive Experiments

Adaptive Experiments

Adaptive experiments can lead to better decisions by appropriately changing the randomization probabilities based on past observations. Our lab works on novel adaptive experimentation algorithms that can handle high-dimensional settings. Adaptation, however, complicates retrospective analysis and inference. Our lab has introduced many techniques for constructing confidence intervals for off-policy evaluation from adaptively collected data.
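To make the two ingredients concrete, here is a toy sketch (all names and constants are invented for illustration) of an epsilon-greedy adaptive assignment rule followed by an inverse-propensity-weighted (IPW) off-policy value estimate, which remains unbiased because the logged assignment probabilities are recorded:

```python
# Toy adaptive experiment: epsilon-greedy assignment over two arms, then
# IPW off-policy evaluation of "always play arm 1" from the logged data.
# The environment and all constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_means = [0.3, 0.5]                  # arm 1 is truly better
eps, n = 0.2, 20_000

counts, sums = np.zeros(2), np.zeros(2)
arms, rewards, props = [], [], []
for _ in range(n):
    means = np.divide(sums, counts, out=np.zeros(2), where=counts > 0)
    probs = np.full(2, eps / 2)
    probs[int(np.argmax(means))] += 1 - eps   # probabilities adapt to the past
    a = rng.choice(2, p=probs)
    r = rng.binomial(1, true_means[a])
    counts[a] += 1
    sums[a] += r
    arms.append(a); rewards.append(r); props.append(probs[a])

# IPW reweights logged rewards by the recorded assignment probabilities.
arms, rewards, props = map(np.asarray, (arms, rewards, props))
ipw_value = np.mean((arms == 1) * rewards / props)   # estimates ~0.5
```

Note that standard i.i.d. confidence intervals are not valid here, since each assignment probability depends on the past; constructing intervals that survive this adaptivity is exactly the inferential challenge described above.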

Dynamic Treatment Regimes

Dynamic Treatment Regimes

In many settings, units receive multiple treatments over time in an adaptive and autocorrelated manner. Causal inference in such settings is substantially harder, as we need to take into account the intertemporal effects of the treatments. The area typically goes by the name of dynamic treatment regimes. Our lab has developed ML-based estimation techniques for the evaluation of, and inference on, complex dynamic policies from offline observational data.
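For intuition, here is a textbook two-period g-computation sketch (a classical construction, not the lab's estimators): under sequential randomization, backward-induction regressions recover the value of the dynamic policy "treat in both periods" on synthetic linear data with invented coefficients:

```python
# Two-period g-computation (backward induction) on synthetic linear data.
# Treatments are randomized; all coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x1 = rng.normal(size=n)
t1 = rng.binomial(1, 0.5, size=n)
x2 = 0.5 * x1 + t1 + rng.normal(size=n)   # second-period state carries T1's effect
t2 = rng.binomial(1, 0.5, size=n)
y = x2 + 2.0 * t2 + rng.normal(size=n)    # value of treating twice is 3.0 here

def ols(features, target):
    return np.linalg.lstsq(features, target, rcond=None)[0]

# Stage 2: model E[Y | X2, T2], then plug in T2 = 1.
b2 = ols(np.column_stack([np.ones(n), x2, t2]), y)
q2 = b2[0] + b2[1] * x2 + b2[2] * 1.0

# Stage 1: among T1 = 1 units, model E[Q2 | X1], then average over all X1.
m = t1 == 1
b1 = ols(np.column_stack([np.ones(m.sum()), x1[m]]), q2[m])
policy_value = np.mean(b1[0] + b1[1] * x1)   # recovers ~3.0
```

Naively regressing Y on both treatments at once gets the intertemporal accounting wrong; the backward recursion is what propagates T1's effect through the later state X2.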
Neural-Causal Models

Neural Causal Models

Causal inference can be framed as the problem of training a neural generative model. This lends itself to general-purpose algorithms for identification and partial identification via stochastic-gradient-descent training. Our lab develops methods and provable guarantees for such neural-causal approaches to causal identification and estimation.
Causal Representation Learning

Causal Representation Learning

When dealing with complex data modalities such as text and images, the raw data do not correspond to the high-level causal variables on which we want to perform our causal analysis. Causal representation learning aims to uncover such causal latent factors in an automated manner. Our lab has proven identifiability guarantees, with associated algorithms, for causal latent factor discovery.
Sensitivity Analysis

Sensitivity Analysis

Causal inference is necessarily based on domain assumptions. Sensitivity analysis offers a way to quantify the impact of violations of these assumptions on the causal quantity of interest. Our lab has developed general-purpose sensitivity bound construction procedures with rigorous statistical guarantees, even when ML techniques are used in the estimation process.
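As a stylized example of the logic, here is the textbook linear omitted-variable-bias calculation (far simpler than the lab's nonparametric bounds): a cap on confounding strength translates a biased point estimate into an interval; all numbers are invented for illustration:

```python
# Linear omitted-variable-bias sketch: with an unobserved confounder u, the
# OLS slope of y on t is off by delta * beta_u, where delta is the slope of
# u on t and beta_u its outcome coefficient. A sensitivity interval assumes
# only a cap B on |delta * beta_u|. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
u = rng.normal(size=n)                      # unobserved confounder
t = 0.5 * u + rng.normal(size=n)
y = 1.0 * t + 0.6 * u + rng.normal(size=n)  # true causal effect is 1.0

ols = np.cov(t, y)[0, 1] / np.var(t)        # biased point estimate (~1.24)
delta = np.cov(t, u)[0, 1] / np.var(t)      # unobservable in practice
analytic_bias = delta * 0.6                 # classical OVB formula (~0.24)

B = 0.3                                     # analyst's cap on the bias
interval = (ols - B, ols + B)               # covers the true effect 1.0
```

In practice `delta` is unknowable, which is the point: the analyst reasons only about plausible magnitudes of `B` and reports how the causal conclusion changes as that cap grows.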

Minimax Lower Bounds

Minimax Lower Bounds for Causal Estimation

The causal inference literature has produced many estimation algorithms, each with its pros and cons. A guiding principle for whether there is room for improvement is the minimax optimality framework: what is the best achievable estimation quality when we assume that our data stem from a given family of distributions? Our lab has developed minimax lower bounds that match the existing upper bounds of popular algorithms in a structure-agnostic framework, a good fit for estimation methods that use ML techniques as black boxes.
Robust Statistics and Causal Inference

Robust Statistics and Causal Inference

Real data always contain outliers or small "exceptional" sub-populations. Our lab develops techniques that can deal with such outliers in the context of causal estimation.
Causal Inference and Incentives

Causal Inference and Incentives

Experimentation in real-world systems typically involves interacting with human subjects, each having their own incentives and trying to maximize their utility. Our lab develops experimentation and inference procedures that account for such incentives.
Learning from Experts

Learning from Experts

Expert demonstration data can augment ML algorithms, either by informing adaptive experiments or by informing generative models of the human preferences with which their outputs need to align. Our lab develops learning algorithms that leverage such datasets to improve learning efficiency or model quality.

Teaching


People

Syrgkanis
Vasilis Syrgkanis
Lab PI, Asst. Professor (MS&E)
Whitehouse
Justin Whitehouse
Postdoctoral Researcher
Sojitra
Ravi Sojitra
PhD Student (MS&E)
Lan
Hui Lan
PhD Student (ICME)
Cheedambaram
Keertana Cheedambaram
PhD Student (MS&E)
Tan
Jiyuan Tan
PhD Student (MS&E)
Jin
Jikai Jin
PhD Student (ICME)
Lin
Shiangyi Lin
PhD Rotation Student (ICME)
Sawarni
Ayush Sawarni
PhD Rotation Student (MS&E)
Xie
Chenghan Xie
PhD Rotation Student (MS&E)
Xu
Calvin Xu
Undergrad Researcher (CS)
Seetharaman
Karthik Seetharaman
Undergrad Researcher (Math)
Chawla
Saanvi Chawla
Undergrad Researcher (CS)

Alumni

Wu
Yifan Wu
Visiting PhD Student (23-24)

Publications

2025

  1. A Meta-learner for Heterogeneous Effects in Difference-in-Differences
    Hui Lan, Haoge Chang, Eleanor Dillon, Vasilis Syrgkanis, Arxiv25
  2. Detecting clinician implicit biases in diagnoses using proximal causal inference
    Kara Liu, Russ Altman, Vasilis Syrgkanis, Pacific Symposium on Biocomputing 2025

2024

  1. Predicting Long Term Sequential Policy Value Using Softer Surrogates
    Hyunji Nam, Allen Nie, Ge Gao, Vasilis Syrgkanis, Emma Brunskill, Arxiv24
  2. Conditional Influence Functions
    Victor Chernozhukov, Whitney K. Newey, Vasilis Syrgkanis, Arxiv24
  3. Automatic Doubly Robust Forests
    Zhaomeng Chen, Junting Duan, Victor Chernozhukov, Vasilis Syrgkanis, Arxiv24
  4. Switchback Price Experiments with Forward-Looking Demand
    Yifan Wu, Ramesh Johari, Vasilis Syrgkanis, Gabriel Y. Weintraub, Arxiv24
  5. Personalized Adaptation via In-Context Preference Learning
    Allison Lau, Younwoo Choi, Vahid Balazadeh, Keertana Chidambaram, Vasilis Syrgkanis, Rahul G. Krishnan, Arxiv24
  6. Orthogonal Causal Calibration
    Justin Whitehouse, Christopher Jung, Vasilis Syrgkanis, Bryan Wilder, Zhiwei Steven Wu, Arxiv24
  7. Dynamic Local Average Treatment Effects
    Ravi Sojitra, Vasilis Syrgkanis, Arxiv24
  8. Simultaneous Inference for Local Structural Parameters with Random Forests
    David Ritzwoller, Vasilis Syrgkanis, Arxiv24
  9. Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation
    Jikai Jin, Vasilis Syrgkanis, Arxiv24
  10. Regularized DeepIV with Model Selection
    Zihao Li, Hui Lan, Vasilis Syrgkanis, Mengdi Wang, Masatoshi Uehara, Arxiv24
  11. Taking a Moment for Distributional Robustness
    Jabari Hastings, Christopher Jung, Charlotte Peale, Vasilis Syrgkanis, Arxiv24
  12. Direct Preference Optimization With Unobserved Preference Heterogeneity
    Keertana Chidambaram, Karthik Vinay Seetharaman, Vasilis Syrgkanis, Arxiv24
  13. Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity
    Jikai Jin, Vasilis Syrgkanis, NeurIPS24 (Spotlight)
  14. Consistency of Neural Causal Partial Identification
    Jiyuan Tan, Jose Blanchet, Vasilis Syrgkanis, NeurIPS24
  15. Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity
    Vahid Balazadeh, Keertana Chidambaram, Viet Nguyen, Rahul G. Krishnan, Vasilis Syrgkanis, NeurIPS24
  16. Causal Q-Aggregation for CATE Model Selection
    Hui Lan, Vasilis Syrgkanis, AISTATS24
  17. Adaptive Instrument Design for Indirect Experiments
    Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill, ICLR24
  18. Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation
    Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis, ICLR24

2023

  1. Incentive-Aware Synthetic Control: Accurate Counterfactual Estimation via Incentivized Exploration
    Daniel Ngo, Keegan Harris, Anish Agarwal, Vasilis Syrgkanis, Zhiwei Steven Wu, Arxiv23
  2. Inference on Optimal Dynamic Policies via Softmax Approximation
    Qizhao Chen, Morgane Austern, Vasilis Syrgkanis, Arxiv23
  3. Automatic Debiased Machine Learning for Covariate Shifts
    Victor Chernozhukov, Michael Newey, Whitney K Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv23
  4. Post Reinforcement Learning Inference
    Ruohan Zhan, Vasilis Syrgkanis, Arxiv23, Operations Research (Revise & Resubmit)
  5. Source Condition Double Robust Inference on Functionals of Inverse Problems
    Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara, Arxiv23
  6. Inference on Strongly Identified Functionals of Weakly Identified Functions
    Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara, COLT23
  7. Minimax Instrumental Variable Regression and L2 Convergence Guarantees without Identification or Closedness
    Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara, COLT23

2022

  1. Synthetic Blip Effects: Generalizing Synthetic Controls for the Dynamic Treatment Regime
    Anish Agarwal, Vasilis Syrgkanis, Arxiv22
  2. Finding Subgroups with Significant Treatment Effects
    Jann Spiess, Vasilis Syrgkanis, Victor Yaneng Wang, CLeaR22
  3. Non-Parametric Inference Adaptive to Intrinsic Dimension
    Khashayar Khosravi, Gregory Lewis, Vasilis Syrgkanis, CLeaR22
  4. Towards efficient representation identification in supervised learning
    Kartik Ahuja, Divyat Mahajan, Ioannis Mitliagkas, Vasilis Syrgkanis, CLeaR22
  5. Regularized Orthogonal Machine Learning for Nonlinear Semiparametric Models
    Denis Nekipelov, Vira Semenova, Vasilis Syrgkanis, The Econometrics Journal, 2022
  6. Automatic Debiased Machine Learning for Dynamic Treatment Effects
    Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv22
  7. Robust Generalized Method of Moments: A Finite Sample Viewpoint
    Dhruv Rohatgi, Vasilis Syrgkanis, NeurIPS22
  8. Debiased Machine Learning without Sample-Splitting for Stable Estimators
    Qizhao Chen, Vasilis Syrgkanis, Morgane Austern, NeurIPS22
  9. Partial Identification of Treatment Effects with Implicit Generative Models
    Vahid Balazadeh Meresht, Vasilis Syrgkanis, Rahul G Krishnan, NeurIPS22 (Spotlight)
  10. RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests
    Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, Vasilis Syrgkanis, ICML22 (Long Oral)

2021

  1. Long Story Short: Omitted Variable Bias in Causal Machine Learning
    Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis, Arxiv21, Review of Economics and Statistics (Revise & Resubmit)
  2. Automatic Debiased Machine Learning via Riesz Regression
    Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, Vasilis Syrgkanis, Arxiv21
  3. Adversarial Estimation of Riesz Representers
    Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv21
  4. Estimating the Long-Term Effects of Novel Treatments
    Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Miruna Oprescu, Vasilis Syrgkanis, NeurIPS21
  5. Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation
    Greg Lewis, Vasilis Syrgkanis, NeurIPS21
  6. Asymptotics of the Bootstrap via Stability with Applications to Inference with Model Selection
    Morgane Austern, Vasilis Syrgkanis, NeurIPS21
  7. DoWhy: Addressing Challenges in Expressing and Validating Causal Assumptions
    Amit Sharma, Vasilis Syrgkanis, Cheng Zhang, Emre Kiciman, ICML21 Workshop on the Neglected Assumptions in Causal Inference
  8. Dynamically Aggregating Diverse Information
    Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis, EC21 and Econometrica, 2021
  9. Incentivizing Compliance with Algorithmic Instruments
    Daniel Ngo, Logan Stapleton, Vasilis Syrgkanis, Zhiwei Steven Wu, ICML21
  10. Knowledge Distillation as Semi-Parametric Inference
    Tri Dao, Govinda Kamath, Vasilis Syrgkanis, Lester Mackey, ICLR21

2020

  1. Estimation and Inference with Trees and Forests in High Dimensions
    Vasilis Syrgkanis, Manolis Zampetakis, COLT20
  2. Minimax Estimation of Conditional Moment Models
    Nishanth Dikkala, Greg Lewis, Lester Mackey, Vasilis Syrgkanis, NeurIPS20

2019

  1. Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments
    Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis, NeurIPS19 (spotlight)
  2. Semi-Parametric Efficient Policy Learning with Continuous Actions
    Mert Demirer, Vasilis Syrgkanis, Greg Lewis, Victor Chernozhukov, NeurIPS19
  3. Low-rank Bandit Methods for High-dimensional Dynamic Pricing
    Jonas Mueller, Vasilis Syrgkanis, Matt Taddy, NeurIPS19
  4. Orthogonal Statistical Learning
    Dylan Foster, Vasilis Syrgkanis, COLT19 (Best Paper Award) and Annals of Statistics 2022
  5. Orthogonal Random Forest for Causal Inference
    Miruna Oprescu, Vasilis Syrgkanis, Zhiwei Steven Wu, ICML19

2018

  1. Semiparametric Contextual Bandits
    Akshay Krishnamurthy, Zhiwei Steven Wu, Vasilis Syrgkanis, ICML18
  2. Accurate Inference for Adaptive Linear Models
    Yash Deshpande, Lester Mackey, Vasilis Syrgkanis, Matt Taddy, ICML18
  3. Optimal Data Acquisition for Statistical Estimation
    Yiling Chen, Nicole Immorlica, Brendan Lucier, Vasilis Syrgkanis, Juba Ziani, EC18
  4. Orthogonal Machine Learning: Power and Limitations
    Lester Mackey, Vasilis Syrgkanis, Ilias Zadik, ICML18
Heterogeneous Effects

Heterogeneous Effects: Estimation, Inference and Policy Learning

  1. A Meta-learner for Heterogeneous Effects in Difference-in-Differences
    Hui Lan, Haoge Chang, Eleanor Dillon, Vasilis Syrgkanis, Arxiv25
  2. Conditional Influence Functions
    Victor Chernozhukov, Whitney K. Newey, Vasilis Syrgkanis, Arxiv24
  3. Automatic Doubly Robust Forests
    Zhaomeng Chen, Junting Duan, Victor Chernozhukov, Vasilis Syrgkanis, Arxiv24
  4. Orthogonal Causal Calibration
    Justin Whitehouse, Christopher Jung, Vasilis Syrgkanis, Bryan Wilder, Zhiwei Steven Wu, Arxiv24
  5. Simultaneous Inference for Local Structural Parameters with Random Forests
    David Ritzwoller, Vasilis Syrgkanis, Arxiv24
  6. Causal Q-Aggregation for CATE Model Selection
    Hui Lan, Vasilis Syrgkanis, AISTATS24
  7. Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation
    Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis, ICLR24
  8. Finding Subgroups with Significant Treatment Effects
    Jann Spiess, Vasilis Syrgkanis, Victor Yaneng Wang, CLeaR22
  9. Non-Parametric Inference Adaptive to Intrinsic Dimension
    Khashayar Khosravi, Gregory Lewis, Vasilis Syrgkanis, CLeaR22
  10. Regularized Orthogonal Machine Learning for Nonlinear Semiparametric Models
    Denis Nekipelov, Vira Semenova, Vasilis Syrgkanis, The Econometrics Journal, 2022
  11. Estimation and Inference with Trees and Forests in High Dimensions
    Vasilis Syrgkanis, Manolis Zampetakis, COLT20
  12. Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments
    Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis, NeurIPS19 (spotlight)
  13. Semi-Parametric Efficient Policy Learning with Continuous Actions
    Mert Demirer, Vasilis Syrgkanis, Greg Lewis, Victor Chernozhukov, NeurIPS19
  14. Orthogonal Statistical Learning
    Dylan Foster, Vasilis Syrgkanis, COLT19 (Best Paper Award) and Annals of Statistics 2022
  15. Orthogonal Random Forest for Causal Inference
    Miruna Oprescu, Vasilis Syrgkanis, Zhiwei Steven Wu, ICML19
Debiased ML

Debiased Machine Learning

  1. Detecting clinician implicit biases in diagnoses using proximal causal inference
    Kara Liu, Russ Altman, Vasilis Syrgkanis, Pacific Symposium on Biocomputing 2025
  2. Predicting Long Term Sequential Policy Value Using Softer Surrogates
    Hyunji Nam, Allen Nie, Ge Gao, Vasilis Syrgkanis, Emma Brunskill, Arxiv24
  3. Source Condition Double Robust Inference on Functionals of Inverse Problems
    Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara, Arxiv23
  4. Automatic Debiased Machine Learning for Covariate Shifts
    Victor Chernozhukov, Michael Newey, Whitney K Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv23
  5. Automatic Debiased Machine Learning for Dynamic Treatment Effects
    Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv22
  6. Automatic Debiased Machine Learning via Riesz Regression
    Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, Vasilis Syrgkanis, Arxiv21
  7. Adversarial Estimation of Riesz Representers
    Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv21
  8. Inference on Strongly Identified Functionals of Weakly Identified Functions
    Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara, COLT23
  9. Debiased Machine Learning without Sample-Splitting for Stable Estimators
    Qizhao Chen, Vasilis Syrgkanis, Morgane Austern, NeurIPS22
  10. RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests
    Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, Vasilis Syrgkanis, ICML22 (Long Oral)
  11. Knowledge Distillation as Semi-Parametric Inference
    Tri Dao, Govinda Kamath, Vasilis Syrgkanis, Lester Mackey, ICLR21
  12. Orthogonal Machine Learning: Power and Limitations
    Lester Mackey, Vasilis Syrgkanis, Ilias Zadik, ICML18
Non-Parametric Instrumental Variables

Non-Parametric Instrumental Variables

  1. Regularized DeepIV with Model Selection
    Zihao Li, Hui Lan, Vasilis Syrgkanis, Mengdi Wang, Masatoshi Uehara, Arxiv24
  2. Minimax Instrumental Variable Regression and L2 Convergence Guarantees without Identification or Closedness
    Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara, COLT23
  3. Minimax Estimation of Conditional Moment Models
    Nishanth Dikkala, Greg Lewis, Lester Mackey, Vasilis Syrgkanis, NeurIPS20
  4. Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments
    Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis, NeurIPS19 (spotlight)
Adaptive Experiments

Adaptive Experiments

  1. Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity
    Vahid Balazadeh, Keertana Chidambaram, Viet Nguyen, Rahul G. Krishnan, Vasilis Syrgkanis, NeurIPS24
  2. Post Reinforcement Learning Inference
    Ruohan Zhan, Vasilis Syrgkanis, Operations Research (Revise & Resubmit)
  3. Adaptive Instrument Design for Indirect Experiments
    Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill, ICLR24
  4. Dynamically Aggregating Diverse Information
    Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis, EC21 and Econometrica, 2021
  5. Low-rank Bandit Methods for High-dimensional Dynamic Pricing
    Jonas Mueller, Vasilis Syrgkanis, Matt Taddy, NeurIPS19
  6. Semiparametric Contextual Bandits
    Akshay Krishnamurthy, Zhiwei Steven Wu, Vasilis Syrgkanis, ICML18
  7. Accurate Inference for Adaptive Linear Models
    Yash Deshpande, Lester Mackey, Vasilis Syrgkanis, Matt Taddy, ICML18
Dynamic Treatment Regimes

Dynamic Treatment Regimes

  1. Dynamic Local Average Treatment Effects
    Ravi Sojitra, Vasilis Syrgkanis, Arxiv24
  2. Inference on Optimal Dynamic Policies via Softmax Approximation
    Qizhao Chen, Morgane Austern, Vasilis Syrgkanis, Arxiv23
  3. Synthetic Blip Effects: Generalizing Synthetic Controls for the Dynamic Treatment Regime
    Anish Agarwal, Vasilis Syrgkanis, Arxiv22
  4. Automatic Debiased Machine Learning for Dynamic Treatment Effects
    Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis, Arxiv22
  5. Estimating the Long-Term Effects of Novel Treatments
    Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Miruna Oprescu, Vasilis Syrgkanis, NeurIPS21
  6. Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation
    Greg Lewis, Vasilis Syrgkanis, NeurIPS21
Neural Causal Models

Neural Causal Models

  1. Consistency of Neural Causal Partial Identification
    Jiyuan Tan, Jose Blanchet, Vasilis Syrgkanis, NeurIPS24
  2. Partial Identification of Treatment Effects with Implicit Generative Models
    Vahid Balazadeh Meresht, Vasilis Syrgkanis, Rahul G Krishnan, NeurIPS22 (Spotlight)
Causal Representation Learning

Causal Representation Learning

  1. Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity
    Jikai Jin, Vasilis Syrgkanis, NeurIPS24 (Spotlight)
  2. Towards efficient representation identification in supervised learning
    Kartik Ahuja, Divyat Mahajan, Ioannis Mitliagkas, Vasilis Syrgkanis, CLeaR22
Sensitivity Analysis

Sensitivity Analysis

  1. Long Story Short: Omitted Variable Bias in Causal Machine Learning
    Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis, Review of Economics and Statistics (Revise & Resubmit)
  2. DoWhy: Addressing Challenges in Expressing and Validating Causal Assumptions
    Amit Sharma, Vasilis Syrgkanis, Cheng Zhang, Emre Kiciman, ICML21 Workshop on the Neglected Assumptions in Causal Inference
Minimax Lower Bounds for Causal Estimation

Minimax Lower Bounds for Causal Estimation

  1. Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation
    Jikai Jin, Vasilis Syrgkanis, Arxiv24
Robust Statistics and Causal Inference

Robust Statistics and Causal Inference

  1. Taking a Moment for Distributional Robustness
    Jabari Hastings, Christopher Jung, Charlotte Peale, Vasilis Syrgkanis, Arxiv24
  2. Robust Generalized Method of Moments: A Finite Sample Viewpoint
    Dhruv Rohatgi, Vasilis Syrgkanis, NeurIPS22
Causal Inference and Incentives

Causal Inference and Incentives

  1. Switchback Price Experiments with Forward-Looking Demand
    Yifan Wu, Ramesh Johari, Vasilis Syrgkanis, Gabriel Y. Weintraub, Arxiv24
  2. Incentive-Aware Synthetic Control: Accurate Counterfactual Estimation via Incentivized Exploration
    Daniel Ngo, Keegan Harris, Anish Agarwal, Vasilis Syrgkanis, Zhiwei Steven Wu, Arxiv23
  3. Incentivizing Compliance with Algorithmic Instruments
    Daniel Ngo, Logan Stapleton, Vasilis Syrgkanis, Zhiwei Steven Wu, ICML21
  4. Optimal Data Acquisition for Statistical Estimation
    Yiling Chen, Nicole Immorlica, Brendan Lucier, Vasilis Syrgkanis, Juba Ziani, EC18
Learning from Experts

Learning from Experts

  1. Personalized Adaptation via In-Context Preference Learning
    Allison Lau, Younwoo Choi, Vahid Balazadeh, Keertana Chidambaram, Vasilis Syrgkanis, Rahul G. Krishnan, Arxiv24
  2. Direct Preference Optimization With Unobserved Preference Heterogeneity
    Keertana Chidambaram, Karthik Vinay Seetharaman, Vasilis Syrgkanis, Arxiv24
  3. Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity
    Vahid Balazadeh, Keertana Chidambaram, Viet Nguyen, Rahul G. Krishnan, Vasilis Syrgkanis, NeurIPS24

Contact Us

  • Huang 252, Huang Engineering Center, Stanford, CA 94305

    vsyrgk@stanford.edu