I specialize in natural language processing (NLP), causal inference, and probabilistic modeling.
My academic background is in philosophy of language and experimental psychology. Before starting my career in data science, I worked as a music producer and mixing engineer.
I am currently working on a book, Causal Inference and Discovery in Python (available H1 2023).
Talks & Workshops
I am extremely grateful to all the people who have shared their knowledge and experience with me, and giving back is important to me. That's why I love sharing what I've learned with others.
Interested in NLP, causal or probabilistic modeling? Join me at one of the upcoming events!
Interested in organizing a training session for your company? Click here!
KI Fabrigk Konferenz, Ingolstadt (Jul 1, 2022)
Data Science Summit ML Edition, Warsaw (Jun 22, 2022)
GHOST Day Applied ML Conf 2022 (Mar 24, 2022)
Data Science Summit (Dec 3, 2021)
NLP & AI Day 2021 (Oct 26, 2021)
Training sessions for your team
…and you can read them for free if you want! ❤️
Kevin Murphy's Probabilistic Machine Learning: An Introduction, Bayesian Modeling and Computation in Python by Osvaldo Martin and colleagues, and Deep Learning on Graphs. What are they about, and what can you expect inside?
Part 4: Going fully probabilistic
Part 3: Epistemic uncertainty
Part 2: Aleatoric uncertainty
Sunday AI Papers
Last Thursday, researchers from the University of Wisconsin-Madison released a brand new paper analyzing the robustness of the ViT model to spurious correlations.
Last Tuesday, researchers from Microsoft Research released a paper introducing a new method for building 𝗲𝘅𝘁𝗿𝗲𝗺𝗲𝗹𝘆 𝗱𝗲𝗲𝗽 Transformer models.
Last Monday, researchers from Cornell University and Google Brain released a new paper presenting 𝗙𝗟𝗔𝗦𝗛, a novel, efficient modification of the Transformer architecture.
On Thursday, Leslie N. Smith of the U.S. Naval Research Laboratory released his new paper on general cyclical training. Sound familiar? Maybe, but don't be misled!
On the first Friday of February, researchers from Microsoft Research, the University of Cambridge, the University of Massachusetts Amherst, and G-Research released a paper describing a novel method for end-to-end causal inference.
Last Monday, researchers from the University of Pennsylvania released a paper proposing a new framework to perform 𝗰𝗼𝗺𝗺𝗼𝗻𝘀𝗲𝗻𝘀𝗲 𝗰𝗮𝘂𝘀𝗮𝗹𝗶𝘁𝘆 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 (𝗖𝗖𝗥).
Last Saturday, researchers from UC San Diego and Amazon AI released a paper describing a novel approach to conditional text generation that leverages causal inference principles to mitigate the effects of spurious correlations.
Last Wednesday, researchers from The University of Hong Kong, the National University of Defense Technology, and SenseTime released a paper proposing a new contrastive sentence embedding framework called 𝗦𝗡𝗖𝗦𝗘.
On the last Monday of December, researchers from the University of Amsterdam and Amazon released a paper introducing a novel uncertainty estimation method for Transformers.
Last Wednesday, researchers from DeepMind released a paper describing a novel approach to RL-agent training that makes the agents more robust. Agents were able to learn more generalizable abstractions thanks to... explaining their decisions.
Last Tuesday, researchers from Huawei Noah's Ark Lab and the University of Toronto released a new causal discovery package and an accompanying paper. It brings some really cool features to the table!
Last Sunday, researchers from the University of Chicago and Carnegie Mellon University released a paper proposing a novel method for discovering a causal graph with latent variables, a notoriously hard problem.
Last Thursday, researchers from Intel Labs and the University of California, Santa Barbara proposed a new approach to model distillation. The proposed architecture achieves a very good trade-off between 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝘁𝗶𝗺𝗲 𝗿𝗲𝗱𝘂𝗰𝘁𝗶𝗼𝗻 and 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆.
Last Tuesday, researchers from the University of Cambridge, Amazon Web Services (AWS) AI, and Monash University released a paper introducing a new pre-training approach that leverages a contrastive loss scheme.
Last Thursday, researchers from Eindhoven University of Technology released a paper describing a framework for automated string preprocessing and encoding. The framework leverages probabilistic type inference, among other interesting components.