Berkant Turan

ML Researcher @ IOL Lab at ZIB | PhD Candidate @ TU Berlin | Member @ Berlin Mathematical School


I am a fourth-year PhD candidate at the Institute of Mathematics, TU Berlin, and a research associate at the Zuse Institute Berlin (ZIB), where I am part of the Interactive Optimization and Learning (IOL) research group under the supervision of Prof. Sebastian Pokutta. I am also a member of the Berlin Mathematical School (BMS), which is part of the MATH+ Cluster of Excellence.

My research centers on Prover-Verifier Games for Trustworthy Machine Learning. I develop interactive, multi-agent models (such as Merlin-Arthur classifiers and concept-based verifiers) that provide provable interpretability guarantees: the evidence used for a prediction can be checked and formally linked to the model's decision. I am also interested in the theoretical limits of model security, including adversarial robustness, backdoor-based watermarks, and transferable attacks, and their connections to cryptography.

A second line of my work is AI-based methods for Earth observation, in particular high-resolution forest monitoring. As part of the AI4Forest project at ZIB, I develop scalable ML pipelines for canopy height estimation and biomass mapping from satellite data (e.g. Sentinel-1/2 and GEDI), supporting large-scale ecological monitoring and climate research.

Before starting my PhD, I worked on Deep Hybrid Discriminative-Generative Modeling, investigating Variational Autoencoders and Residual Networks for out-of-distribution detection, robustness, and calibration in computer vision.

news

09/2025 Thrilled to announce that The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses has been accepted at NeurIPS 2025! Huge thanks to my collaborators Grzegorz Głuch, Sai Ganesh Nagarajan, and Sebastian Pokutta.
06/2025 Happy to share that Neural Concept Verifier: Scaling Prover-Verifier Games Via Concept Encodings has been accepted at ICML 2025 Workshop on Actionable Interpretability! Grateful to collaborate with S. Asadulla, D. Steinmann, K. Kersting, W. Stammer, and S. Pokutta on advancing interpretable AI through formal verification methods.
05/2025 Excited to announce that our collaborative work on Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation has been accepted at ICML 2025! Many thanks to my amazing collaborators J. Pauls, M. Zimmer, S. Saatchi, P. Ciais, S. Pokutta, and F. Gieseke for this interdisciplinary project bridging machine learning and environmental science.
03/2025 Great news! The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses has been accepted at ICLR 2025 Workshop on GenAI Watermarking.
10/2024 Excited to announce that The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses is now available on arXiv! Many thanks to my collaborators Grzegorz Głuch (then at EPFL), Sai Ganesh Nagarajan (ZIB), and Sebastian Pokutta (ZIB) for their contributions to this project!

selected publications

  1. Neural Concept Verifier: Scaling Prover-Verifier Games Via Concept Encodings
    Berkant Turan, Suhrab Asadulla, David Steinmann, Kristian Kersting, and 2 more authors
    In Actionable Interpretability Workshop at ICML, 2025
  2. Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation
    Jan Pauls*, Max Zimmer*, Berkant Turan*, Sassan Saatchi, and 3 more authors
    In International Conference on Machine Learning (ICML), 2025
    * Equal contribution
  3. The Good, the Bad and the Ugly: Meta-Analysis of Watermarks, Transferable Attacks and Adversarial Defenses
    Grzegorz Głuch, Berkant Turan, Sai Ganesh Nagarajan, and Sebastian Pokutta
    In Conference on Neural Information Processing Systems (NeurIPS), 2025
  4. Unified Taxonomy in AI Safety: Watermarks, Adversarial Defenses, and Transferable Attacks
    Grzegorz Głuch, Sai Ganesh Nagarajan, and Berkant Turan
    In ICML 2024 Workshop on Theoretical Foundations of Foundation Models (TF2M), 2024
  5. Interpretability Guarantees with Merlin-Arthur Classifiers
    Stephan Wäldchen, Kartikey Sharma, Berkant Turan, Max Zimmer, and 1 more author
    In Proceedings of The 27th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, 2024
  6. Extending Merlin-Arthur Classifiers for Improved Interpretability
    Berkant Turan
    In Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023
    (Best Proposal Award)