About me

CV ~ LinkedIn ~ Google Scholar ~ GitHub

My name is Jan Malte Lichtenberg, and I'm a Principal Applied Scientist at Albatross AI in Berlin. I build agentic recommender systems and multi-modal search systems at scale, with the goal of making recommendations more steerable for the end user.

Previously, I was a Senior Applied Scientist at Amazon Music, where I worked on the fascinating topic of personalizing music recommendations. I developed an LLM-based music recommender system called Maestro and was the project's science lead from inception to launch. I also worked on offline policy evaluation and on ranking items across different content types.

I completed my PhD at the University of Bath, working on bounded rationality in reinforcement learning with Özgür Şimşek. Before that, I worked at the Center for Adaptive Behaviour and Cognition, directed by Gerd Gigerenzer, at the Max Planck Institute for Human Development in Berlin. In my research, I try to inform machine learning models with insights from human decision-making research, in particular from the study of simple decision heuristics.

Publications

DenseRec: Revisiting Dense Content Embeddings for Sequential Transformer-based Recommendation
Jan Malte Lichtenberg, Alessandro De Candia, and Matteo Ruffini
EARL@RecSys, 2025 [pdf]

Ranking Across Different Content Types: The Robust Beauty of Multinomial Blending
Jan Malte Lichtenberg, Giuseppe Di Benedetto, and Matteo Ruffini
Proceedings of the 18th ACM Conference on Recommender Systems, 2024 [pdf]

Counterfactual Ranking Evaluation with Flexible Click Models
Alexander Buchholz, Ben London, Giuseppe Di Benedetto, Jan Malte Lichtenberg, Yannik Stein, and Thorsten Joachims
Proceedings of the 47th International ACM SIGIR Conference, 2024 [pdf]

Large Language Models as Recommender Systems: A Study of Popularity Bias
Jan Malte Lichtenberg, Alexander Buchholz, and Pola Schwöbel
Gen-IR@SIGIR, 2024 [pdf]

Double Clipping: Less-Biased Variance Reduction in Off-Policy Evaluation
Jan Malte Lichtenberg, Alexander Buchholz, Giuseppe Di Benedetto, Matteo Ruffini, and Ben London
CONSEQUENCES@RecSys, 2023 [pdf]

Self-normalized Off-Policy Estimators for Ranking
Ben London, Alexander Buchholz, Giuseppe Di Benedetto, Jan Malte Lichtenberg, Yannik Stein, and Thorsten Joachims
CONSEQUENCES@RecSys, 2023

Contextual Position Bias Estimation Using a Single Stochastic Logging Policy
Giuseppe Di Benedetto, Alexander Buchholz, Ben London, Matej Jakimov, Yannik Stein, Jan Malte Lichtenberg, Vito Bellini, Matteo Ruffini, and Thorsten Joachims
LERI@RecSys, 2023

Bounded Rationality in Reinforcement Learning
Jan Malte Lichtenberg
PhD thesis, University of Bath, UK, 2023 [pdf]

Low-variance Estimation in the Plackett-Luce Model via Quasi-Monte Carlo Sampling
Alexander Buchholz, Jan Malte Lichtenberg, Giuseppe Di Benedetto, Yannik Stein, Vito Bellini, and Matteo Ruffini
SIGIR Workshop on Reaching Efficiency in Neural Information Retrieval, 2022 [pdf]

Regularization in Directable Environments with Application to Tetris
Jan Malte Lichtenberg and Özgür Şimşek
International Conference on Machine Learning (ICML), 2019 [pdf] [code]

Iterative Policy Space Expansion for Reinforcement Learning
Jan Malte Lichtenberg and Özgür Şimşek
NeurIPS Workshop on Biological and Artificial Reinforcement Learning, 2019 [pdf]

Simple Regression Models
Jan Malte Lichtenberg and Özgür Şimşek
Imperfect Decision Makers: Admitting Real-World Rationality, PMLR, 2017 [pdf]

About this website

This website is built with gatsby.js and react.js and hosted on netlify.com. Sidenotes and margin notes use tufte-css. All visualisations are made with p5.js. Thanks to Corey Gouker for his help in making p5.js and gatsby.js work well together.
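As a hypothetical illustration (not the site's actual code), here is a minimal sketch of one way a p5.js sketch can be wrapped in a React component for use on a Gatsby page. Since Gatsby pre-renders pages on the server while p5.js expects a browser environment, a real setup would additionally make sure p5 is only loaded in the browser (for example via a dynamic import).

```tsx
// Minimal sketch: a p5.js instance scoped to a React component.
// Illustrative only; the component name is made up for this example.
import React, { useEffect, useRef } from "react";
import p5 from "p5";

const BouncingDot: React.FC = () => {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (!containerRef.current) return;

    // Instance mode attaches the sketch to this component's <div>
    // rather than the global window, which plays nicer with React.
    const instance = new p5((p: p5) => {
      p.setup = () => {
        p.createCanvas(400, 120);
      };
      p.draw = () => {
        p.background(250);
        p.circle(p.frameCount % p.width, p.height / 2, 20);
      };
    }, containerRef.current);

    // Remove the canvas when the component unmounts.
    return () => instance.remove();
  }, []);

  return <div ref={containerRef} />;
};

export default BouncingDot;
```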

© 2021