06/2025 | Happy to share that Neural Concept Verifier: Scaling Prover-Verifier Games Via Concept Encodings has been accepted at the ICML 2025 Workshop on Actionable Interpretability! Grateful to collaborate with S. Asadulla, D. Steinmann, W. Stammer, and S. Pokutta on advancing interpretable AI through formal verification methods. |
05/2025 | Excited to announce that our collaborative work on Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation has been accepted at ICML 2025! Many thanks to my amazing collaborators J. Pauls, M. Zimmer, S. Saatchi, P. Ciais, S. Pokutta, and F. Gieseke for this interdisciplinary project bridging machine learning and environmental science. |
03/2025 | Great news! The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses has been accepted at the ICLR 2025 Workshop on GenAI Watermarking. |
10/2024 | Excited to announce that The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses is now available on arXiv! Many thanks to my collaborators, Grzegorz Głuch (EPFL at the time), Sai Ganesh Nagarajan (ZIB) and Sebastian Pokutta (ZIB), for their contributions to this project! |
06/2024 | Unified Taxonomy of AI Safety: Watermarks, Adversarial Defenses and Transferable Attacks has been accepted at the ICML 2024 Workshop on Theoretical Foundations of Foundation Models. See you in Vienna! |
03/2024 | Our recent paper, Interpretability Guarantees with Merlin-Arthur Classifiers, has been accepted at AISTATS 2024. Looking forward to meeting you in Valencia. |
07/2023 | I received the Best Proposal Award at the xAI-2023 Doctoral Consortium in Lisbon for my research on Extending Merlin-Arthur Classifiers for Improved Interpretability. Thank you to the reviewers and organizers! |
09/2022 | Excited to have started my PhD at TU Berlin and the Zuse Institute Berlin in the Interactive Optimization and Learning research lab, under the supervision of Sebastian Pokutta. |