Episodes

  • #22: Pinterest Homefeed and Ads Ranking with Prabhat Agarwal and Aayush Mudgal
    2024/06/06
    In episode 22 of Recsperts, we welcome Prabhat Agarwal, Senior ML Engineer, and Aayush Mudgal, Staff ML Engineer, both from Pinterest, to the show. Prabhat works on recommendations and search systems at Pinterest, leading representation learning efforts. Aayush is responsible for ads ranking and privacy-aware conversion modeling. We discuss user and content modeling, short- vs. long-term objectives, evaluation, and multi-task learning, and also touch on counterfactual evaluation.

    In our interview, Prabhat guides us through the journey of continuous improvements to Pinterest's Homefeed personalization, starting with techniques such as gradient boosting, then moving via two-tower models to DCN and transformers. We discuss how to capture users' short- and long-term preferences through multiple embeddings and the role of candidate generators for content diversification. Prabhat shares some details about position debiasing and the challenges of facilitating exploration.

    With Aayush, we get the chance to dive into the specifics of ads ranking at Pinterest, and he helps us to better understand how multifaceted ads can be. We learn more about the pain of having too many models and Pinterest's efforts to consolidate the model landscape to improve infrastructural costs, maintainability, and efficiency.

    Aayush also shares some insights about exploration and the corresponding randomization in the context of ads, and how user behavior differs between different kinds of ads. Both guests highlight the role of counterfactual evaluation and its impact on faster experimentation. Towards the end of the episode, we also touch a bit on learnings from last year's RecSys challenge.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
    Don't forget to follow the podcast and please leave a review

    • (00:00) - Introduction
    • (03:51) - Guest Introductions
    • (09:57) - Pinterest Introduction
    • (21:57) - Homefeed Personalization
    • (47:27) - Ads Ranking
    • (01:14:58) - RecSys Challenge 2023
    • (01:20:26) - Closing Remarks

    Links from the Episode:
    • Prabhat Agarwal on LinkedIn
    • Aayush Mudgal on LinkedIn
    • RecSys Challenge 2023
    • Pinterest Engineering Blog
    • Pinterest Labs
    • Prabhat's Talk at GTC 2022: Evolution of web-scale engagement modeling at Pinterest
    • Blogpost: How we use AutoML, Multi-task learning and Multi-tower models for Pinterest Ads
    • Blogpost: Pinterest Home Feed Unified Lightweight Scoring: A Two-tower Approach
    • Blogpost: Experiment without the wait: Speeding up the iteration cycle with Offline Replay Experimentation
    • Blogpost: MLEnv: Standardizing ML at Pinterest Under One ML Engine to Accelerate Innovation

    Papers:

    • Eksombatchai et al. (2018): Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time
    • Ying et al. (2018): Graph Convolutional Neural Networks for Web-Scale Recommender Systems
    • Pal et al. (2020): PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest
    • Pancha et al. (2022): PinnerFormer: Sequence Modeling for User Representation at Pinterest
    • Zhao et al. (2019): Recommending what video to watch next: a multitask ranking system

    General Links:

    • Follow me on LinkedIn
    • Follow me on X
    • Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
    • Recsperts Website
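
    The two-tower approach mentioned for lightweight Homefeed scoring can be sketched in a few lines. This is a hedged illustration only, not Pinterest's implementation: all dimensions, weight shapes, and the random features are invented. The key property is that user and item embeddings are computed by independent towers, so candidates can be ranked with a single dot product each (and served via approximate nearest-neighbour lookup):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def tower(x, w1, w2):
        # One hidden layer with ReLU, then an L2-normalized embedding.
        h = np.maximum(x @ w1, 0.0)
        e = h @ w2
        return e / np.linalg.norm(e, axis=-1, keepdims=True)

    # Invented sizes: 16-dim raw features -> 8-dim embeddings.
    user_feats = rng.normal(size=(1, 16))
    item_feats = rng.normal(size=(100, 16))
    wu1, wu2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 8))
    wi1, wi2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 8))

    user_emb = tower(user_feats, wu1, wu2)    # (1, 8)
    item_embs = tower(item_feats, wi1, wi2)   # (100, 8)

    # Lightweight scoring: one dot product per candidate item.
    scores = (item_embs @ user_emb.T).ravel()
    top_k = np.argsort(-scores)[:10]
    ```

    Because the item tower does not depend on the user, item embeddings can be precomputed and indexed offline, which is what makes this family of models cheap enough for first-stage ranking.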
    1 hour 24 minutes
  • #21: User-Centric Evaluation and Interactive Recommender Systems with Martijn Willemsen
    2024/04/08

    In episode 21 of Recsperts, we welcome Martijn Willemsen, Associate Professor at the Jheronimus Academy of Data Science and Eindhoven University of Technology. Martijn's research focuses on interactive recommender systems, which includes aspects of decision psychology and user-centric evaluation. We discuss how users gain control over recommendations, how to support their goals and needs, as well as how the user-centric evaluation framework fits into all of this.

    In our interview, Martijn outlines the reasons for providing users control over recommendations and how to holistically evaluate the satisfaction and usefulness of recommendations for users' goals and needs. We discuss the psychology of decision making with respect to how well (or poorly) recommender systems support it. We also dive into music recommender systems and discuss how nudging users to explore new genres can work, as well as how longitudinal studies in recommender systems research can advance insights.

    Towards the end of the episode, Martijn and I also discuss some examples and the usefulness of enabling users to provide negative explicit feedback to the system.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
    Don't forget to follow the podcast and please leave a review

    • (00:00) - Introduction
    • (03:03) - About Martijn Willemsen
    • (15:14) - Waves of User-Centric Evaluation in RecSys
    • (19:35) - Behaviorism is not Enough
    • (46:21) - User-Centric Evaluation Framework
    • (01:05:38) - Genre Exploration and Longitudinal Studies in Music RecSys
    • (01:20:59) - User Control and Negative Explicit Feedback
    • (01:31:50) - Closing Remarks

    Links from the Episode:
    • Martijn Willemsen on LinkedIn
    • Martijn Willemsen's Website
    • User-centric Evaluation Framework
    • Behaviorism is not Enough (Talk at RecSys 2016)
    • Neil Hunt: Quantifying the Value of Better Recommendations (Keynote at RecSys 2014)
    • What recommender systems can learn from decision psychology about preference elicitation and behavioral change (Talk at Boise State (Idaho) and Grouplens at University of Minnesota)
    • Eric J. Johnson: The Elements of Choice
    • Rasch Model
    • Spotify Web API

    Papers:

    • Ekstrand et al. (2016): Behaviorism is not Enough: Better Recommendations Through Listening to Users
    • Knijnenburg et al. (2012): Explaining the user experience of recommender systems
    • Ekstrand et al. (2014): User perception of differences in recommender algorithms
    • Liang et al. (2022): Exploring the longitudinal effects of nudging on users’ music genre exploration behavior and listening preferences
    • McNee et al. (2006): Being accurate is not enough: how accuracy metrics have hurt recommender systems

    General Links:

    • Follow me on LinkedIn
    • Follow me on X
    • Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
    • Recsperts Website
    1 hour 36 minutes
  • #20: Practical Bandits and Travel Recommendations with Bram van den Akker
    2023/11/16

    In episode 20 of Recsperts, we welcome Bram van den Akker, Senior Machine Learning Scientist at Booking.com. Bram's work focuses on bandit algorithms and counterfactual learning. He was one of the creators of the Practical Bandits tutorial at the World Wide Web conference. We talk about the role of bandit feedback in decision making systems, and specifically for recommendations in the travel industry.

    In our interview, Bram elaborates on bandit feedback and how it is used in practice. We discuss off-policy and on-policy bandits, and we learn that counterfactual evaluation is well suited for selecting the best model candidates for downstream A/B testing, but is not a replacement for it. We hear more about the practical challenges of bandit feedback, for example the difference between model scores and propensities, the role of stochasticity, and the nitty-gritty details of reward signals. Bram also shares with us the challenges of recommendations in the travel domain, where he points out the sparsity of signals and the feedback delay.
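
    The distinction between model scores and logged propensities is exactly what makes counterfactual evaluation possible. As a minimal sketch (the function name, clipping constant, and toy numbers are invented for illustration, not Booking.com's implementation), an inverse propensity scoring (IPS) estimator reweights logged rewards by how much more or less likely the target policy was to take the logged action:

    ```python
    import numpy as np

    def ips_estimate(rewards, logging_propensities, target_propensities, clip=10.0):
        """Inverse propensity scoring estimate of a target policy's value
        from logged bandit feedback. Clipping the importance weights bounds
        the variance at the cost of some bias."""
        w = np.minimum(target_propensities / logging_propensities, clip)
        return float(np.mean(w * rewards))

    # Toy log: the logging policy picked among 2 arms uniformly;
    # the target policy prefers the arm that paid off more often here.
    rewards = np.array([0, 1, 1, 0, 1, 1], dtype=float)
    logged_p = np.full(6, 0.5)                       # propensity of the logged action
    target_p = np.array([0.2, 0.8, 0.8, 0.2, 0.8, 0.8])

    value = ips_estimate(rewards, logged_p, target_p)
    ```

    This only works when the logging policy is stochastic (every action has nonzero propensity), which is one reason stochasticity comes up as a practical requirement in the episode.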

    At the end of the episode, we can both agree on a good example of a clickbait-heavy news service on our phones.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
    Don't forget to follow the podcast and please leave a review

    • (00:00) - Introduction
    • (02:58) - About Bram van den Akker
    • (09:16) - Motivation for Practical Bandits Tutorial
    • (16:53) - Specifics and Challenges of Travel Recommendations
    • (26:19) - Role of Bandit Feedback in Practice
    • (49:13) - Motivation for Bandit Feedback
    • (01:00:54) - Practical Start for Counterfactual Evaluation
    • (01:06:33) - Role of Business Rules
    • (01:17:48) - Rewards and More
    • (01:32:45) - Closing Remarks

    Links from the Episode:
    • Bram van den Akker on LinkedIn
    • Practical Bandits: An Industry Perspective (Website)
    • Practical Bandits: An Industry Perspective (Recording)
    • Tutorial at The Web Conference 2020: Unbiased Learning to Rank: Counterfactual and Online Approaches
    • Tutorial at RecSys 2021: Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances
    • GitHub: Open Bandit Pipeline

    Papers:

    • van den Akker et al. (2023): Practical Bandits: An Industry Perspective
    • van den Akker et al. (2022): Extending Open Bandit Pipeline to Simulate Industry Challenges
    • van den Akker et al. (2019): ViTOR: Learning to Rank Webpages Based on Visual Features

    General Links:

    • Follow me on LinkedIn
    • Follow me on X
    • Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
    • Recsperts Website
    1 hour 45 minutes
  • #19: Popularity Bias in Recommender Systems with Himan Abdollahpouri
    2023/10/12

    In episode 19 of Recsperts, we welcome Himan Abdollahpouri, who is an Applied Research Scientist for Personalization & Machine Learning at Spotify. We discuss the role of popularity bias in recommender systems, which was the topic of Himan's dissertation. We talk about multi-objective and multi-stakeholder recommender systems as well as the challenges of music and podcast streaming personalization at Spotify.

    In our interview, Himan walks us through popularity bias as the main cause of unfair recommendations for multiple stakeholders. We discuss the consumer- and provider-side implications and how to evaluate popularity bias. The major problem is not the sheer existence of popularity bias, but its propagation through various collaborative filtering algorithms. We also learn how to counteract it by debiasing the data, the model itself, or its output. Finally, we hear more about the relationship between multi-objective and multi-stakeholder recommender systems.

    At the end of the episode, Himan also shares the influence of popularity bias in music and podcast streaming at Spotify as well as how calibration helps to better cater content to users' preferences.
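
    Of the three mitigation points mentioned (data, model, output), output-side debiasing is the easiest to sketch. This is a generic toy re-ranker, not a method attributed to Himan or Spotify; the penalty weight alpha and the numbers are invented. It subtracts a log-popularity penalty from the relevance score so that long-tail items can outrank marginally more relevant blockbusters:

    ```python
    import numpy as np

    def rerank_with_popularity_penalty(scores, popularity, alpha=0.5):
        """Post-hoc output debiasing: penalize items proportionally to their
        log interaction count, then rank by the adjusted score."""
        adjusted = scores - alpha * np.log1p(popularity)
        return np.argsort(-adjusted)

    scores = np.array([0.90, 0.85, 0.80])   # model relevance estimates
    popularity = np.array([1000, 10, 1])    # historical interaction counts
    order = rerank_with_popularity_penalty(scores, popularity)
    ```

    With these numbers the least popular item wins despite the lowest raw score. In practice alpha would be tuned against the accuracy/coverage trade-off, which connects to the multi-objective discussion later in the episode.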

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
    Don't forget to follow the podcast and please leave a review

    • (00:00) - Introduction
    • (04:43) - About Himan Abdollahpouri
    • (15:23) - What is Popularity Bias and why is it important?
    • (25:05) - Effect of Popularity Bias in Collaborative Filtering
    • (30:30) - Individual Sensitivity towards Popularity
    • (36:25) - Introduction to Bias Mitigation
    • (53:16) - Content for Bias Mitigation
    • (56:53) - Evaluating Popularity Bias
    • (01:05:01) - Popularity Bias in Music and Podcast Streaming
    • (01:08:04) - Multi-Objective Recommender Systems
    • (01:16:13) - Multi-Stakeholder Recommender Systems
    • (01:18:38) - Recommendation Challenges at Spotify
    • (01:35:16) - Closing Remarks

    Links from the Episode:
    • Himan Abdollahpouri on LinkedIn
    • Himan Abdollahpouri on X
    • Himan's Website
    • Himan's PhD Thesis on "Popularity Bias in Recommendation: A Multi-stakeholder Perspective"
    • 2nd Workshop on Multi-Objective Recommender Systems (MORS @ RecSys 2022)

    Papers:

    • Su et al. (2009): A Survey on Collaborative Filtering Techniques
    • Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
    • Abdollahpouri et al. (2021): User-centered Evaluation of Popularity Bias in Recommender Systems
    • Abdollahpouri et al. (2019): The Unfairness of Popularity Bias in Recommendation
    • Abdollahpouri et al. (2017): Controlling Popularity Bias in Learning-to-Rank Recommendation
    • Wasilewski et al. (2016): Incorporating Diversity in a Learning to Rank Recommender System
    • Oh et al. (2011): Novel Recommendation Based on Personal Popularity Tendency
    • Steck (2018): Calibrated Recommendations
    • Abdollahpouri et al. (2023): Calibrated Recommendations as a Minimum-Cost Flow Problem
    • Seymen et al. (2022): Making smart recommendations for perishable and stockout products

    General Links:

    • Follow me on LinkedIn
    • Follow me on X
    • Send me your comments, questions and suggestions to marcel@recsperts.com
    • Recsperts Website
    1 hour 42 minutes
  • #18: Recommender Systems for Children and non-traditional Populations
    2023/08/17

    In episode 18 of Recsperts, we hear from Professor Sole Pera from Delft University of Technology. We discuss the use of recommender systems for non-traditional populations, with children in particular. Sole shares the specifics, surprises, and subtleties of her research on recommendations for children.

    In our interview, Sole and I discuss use cases and domains which need particular attention with respect to non-traditional populations. Sole outlines some of the major challenges, like the lack of public datasets or the multifaceted criteria for the suitability of recommendations. The highly dynamic needs and abilities of children make proper user modeling a crucial part of the design and development of recommender systems. We also touch on how children interact differently with recommender systems and learn that trust plays a major role here.

    Towards the end of the episode, we revisit the different goals and stakeholders involved in recommendations for children, especially the role of parents. We close with an overview of the current research community.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
    Don't forget to follow the podcast and please leave a review

    • (00:00) - Introduction
    • (04:56) - About Sole Pera
    • (06:37) - Non-traditional Populations
    • (09:13) - Dedicated User Modeling
    • (25:01) - Main Application Domains
    • (40:16) - Lack of Data about non-traditional Populations
    • (47:53) - Data for Learning User Profiles
    • (57:09) - Interaction between Children and Recommendations
    • (01:00:26) - Goals and Stakeholders
    • (01:11:35) - Role of Parents and Trust
    • (01:17:59) - Evaluation
    • (01:26:59) - Research Community
    • (01:32:37) - Closing Remarks

    Links from the Episode:
    • Sole Pera on LinkedIn
    • Sole's Website
    • Children and Recommenders
    • KidRec 2022
    • People and Information Research Team (PIReT)

    Papers:

    • Beyhan et al. (2023): Covering Covers: Characterization Of Visual Elements Regarding Sleeves
    • Murgia et al. (2019): The Seven Layers of Complexity of Recommender Systems for Children in Educational Contexts
    • Pera et al. (2019): With a Little Help from My Friends: Use of Recommendations at School
    • Charisi et al. (2022): Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and Policy
    • Gómez et al. (2021): Evaluating recommender systems with and for children: towards a multi-perspective framework
    • Ng et al. (2018): Recommending social-interactive games for adults with autism spectrum disorders (ASD)

    General Links:

    • Follow me on LinkedIn
    • Follow me on Twitter
    • Send me your comments, questions and suggestions to marcel@recsperts.com
    • Recsperts Website
    1 hour 40 minutes
  • #17: Microsoft Recommenders and LLM-based RecSys with Miguel Fierro
    2023/06/15

    In episode 17 of Recsperts, we meet Miguel Fierro who is a Principal Data Science Manager at Microsoft and holds a PhD in robotics. We talk about the Microsoft recommenders repository with over 15k stars on GitHub and discuss the impact of LLMs on RecSys. Miguel also shares his view of the T-shaped data scientist.

    In our interview, Miguel shares how he transitioned from robotics into personalization as well as how the Microsoft recommenders repository started. We learn more about its three key components: examples, library, and tests. With more than 900 tests and more than 30 different algorithms, this library demonstrates a huge effort of open-source contribution and maintenance. We hear more about the principles that made this effort possible and successful. Miguel also shares the reasoning behind evidence-based design, which puts the users of microsoft-recommenders and their expectations first. We also discuss the impact that recent LLM-related innovations have on RecSys.

    At the end of the episode, Miguel explains the T-shaped data professional as advice for staying competitive and building a champion data team. We conclude with some remarks regarding adoption as well as the ethical challenges recommender systems pose, which need further attention.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
    Don't forget to follow the podcast and please leave a review

    • (00:00) - Episode Overview
    • (03:34) - Introduction Miguel Fierro
    • (16:19) - Microsoft Recommenders Repository
    • (30:04) - Structure of MS Recommenders
    • (34:16) - Contributors to MS Recommenders
    • (37:10) - Scalability of MS Recommenders
    • (39:32) - Impact of LLMs on RecSys
    • (48:26) - T-shaped Data Professionals
    • (53:29) - Further RecSys Challenges
    • (59:28) - Closing Remarks

    Links from the Episode:
    • Miguel Fierro on LinkedIn
    • Miguel Fierro on Twitter
    • Miguel's Website
    • Microsoft Recommenders
    • McKinsey (2013): How retailers can keep up with consumers
    • Fortune (2012): Amazon's recommendation secret
    • RecSys 2021 Keynote by Max Welling: Graph Neural Networks for Knowledge Representation and Recommendation

    Papers:

    • Geng et al. (2022): Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)

    General Links:

    • Follow me on LinkedIn
    • Follow me on Twitter
    • Send me your comments, questions and suggestions to marcel@recsperts.com
    • Recsperts Website
    1 hour 3 minutes
  • #16: Fairness in Recommender Systems with Michael D. Ekstrand
    2023/05/17
    In episode 16 of Recsperts, we hear from Michael D. Ekstrand, Associate Professor at Boise State University, about fairness in recommender systems. We discuss why fairness matters and provide an overview of the multidimensional fairness-aware RecSys landscape. Furthermore, we talk about tradeoffs and methods, and receive practical advice on how to get started with tackling unfairness.

    In our discussion, Michael outlines the difference and similarity between fairness and bias. We discuss several stages at which biases can enter the system, as well as how bias can indeed support mitigating unfairness. We also cover the perspectives of different stakeholders with respect to fairness. We learn that measuring fairness depends on the specific fairness concern one is interested in, and that solving fairness universally is highly unlikely.

    Towards the end of the episode, we take a look at further challenges as well as how and where the upcoming RecSys 2023 provides a forum for those interested in fairness-aware recommender systems.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

    • (00:00) - Episode Overview
    • (02:57) - Introduction Michael Ekstrand
    • (17:08) - Motivation for Fairness-Aware Recommender Systems
    • (25:45) - Overview and Definition of Fairness in RecSys
    • (46:51) - Distributional and Representational Harm
    • (53:59) - Relationship between Fairness and Bias
    • (01:04:43) - Tradeoffs
    • (01:13:36) - Methods and Metrics for Fairness
    • (01:28:06) - Practical Advice for Tackling Unfairness
    • (01:32:24) - Further Challenges
    • (01:35:24) - RecSys 2023
    • (01:38:29) - Closing Remarks

    Links from the Episode:
    • Michael Ekstrand on LinkedIn
    • Michael Ekstrand on Mastodon
    • Michael's Website
    • GroupLens Lab at University of Minnesota
    • People and Information Research Team (PIReT)
    • 6th FAccTRec Workshop: Responsible Recommendation
    • NORMalize: The First Workshop on Normative Design and Evaluation of Recommender Systems
    • ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
    • Coursera: Recommender Systems Specialization
    • LensKit: Python Tools for Recommender Systems
    • Chris Anderson - The Long Tail: Why the Future of Business Is Selling Less of More
    • Fairness in Recommender Systems (in Recommender Systems Handbook)
    • Ekstrand et al. (2022): Fairness in Information Access Systems
    • Keynote at EvalRS (CIKM 2022): Do You Want To Hunt A Kraken? Mapping and Expanding Recommendation Fairness
    • Friedler et al. (2021): The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making
    • Safiya Umoja Noble (2018): Algorithms of Oppression: How Search Engines Reinforce Racism

    Papers:

    • Ekstrand et al. (2018): Exploring author gender in book rating and recommendation
    • Ekstrand et al. (2014): User perception of differences in recommender algorithms
    • Selbst et al. (2019): Fairness and Abstraction in Sociotechnical Systems
    • Pinney et al. (2023): Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access
    • Diaz et al. (2020): Evaluating Stochastic Rankings with Expected Exposure
    • Raj et al. (2022): Fire Dragon and Unicorn Princess: Gender Stereotypes and Children's Products in Search Engine Responses
    • Mitchell et al. (2021): Algorithmic Fairness: Choices, Assumptions, and Definitions
    • Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
    • Raj et al. (2022): Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison
    • Beutel et al. (2019): Fairness in Recommendation Ranking through Pairwise Comparisons
    • Beutel et al. (2017): Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
    • Dwork et al. (2018): Fairness Under Composition
    • Bower et al. (2022): Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender Systems
    • Zehlike et al. (2022): Fairness in Ranking: A Survey
    • Hoffmann (2019): Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse
    • Sweeney (2013): Discrimination in Online Ad Delivery
    • Wang et al. (2021): User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets

    General Links:

    • Follow me on Twitter: https://twitter.com/MarcelKurovski
    • Send me your comments, questions and suggestions to marcel@recsperts.com
    • Podcast Website: https://www.recsperts.com/
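
    Several of the ranking-fairness metrics referenced in this episode (e.g., the expected-exposure work of Diaz et al. 2020) start from a position-bias model: items further down the ranking receive less attention. A toy sketch, assuming a geometric attention decay (the gamma value and the group assignment are invented for illustration), measures each provider group's share of total exposure:

    ```python
    import numpy as np

    def exposure_share(ranking_groups, n_groups, gamma=0.85):
        """Per-group share of exposure under a geometric position-bias
        model: the item at rank k receives attention gamma**k."""
        weights = gamma ** np.arange(len(ranking_groups))
        exp = np.zeros(n_groups)
        for g, w in zip(ranking_groups, weights):
            exp[g] += w
        return exp / exp.sum()

    # Toy ranking of 6 items from two provider groups (0 and 1).
    groups = [0, 0, 1, 0, 1, 1]
    share = exposure_share(groups, n_groups=2)
    ```

    Even though both groups have three items each, group 0 collects the larger exposure share because its items sit near the top. Which disparity (if any) counts as unfair then depends on the specific fairness concern, as discussed in the episode.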
    1 hour 43 minutes
  • #15: Podcast Recommendations in the ARD Audiothek with Mirza Klimenta
    2023/04/27

    In episode 15 of Recsperts, we delve into podcast recommendations with senior data scientist Mirza Klimenta. Mirza discusses his work on the ARD Audiothek, the audio-on-demand platform of German public broadcasting, where he is part of pub. Public Value Technologies, a subsidiary of the two regional public broadcasters BR and SWR.

    We explore the use and potency of simple algorithms and ways to mitigate popularity bias in data and recommendations. We also cover collaborative filtering and various approaches for content-based podcast recommendations, drawing on Mirza's expertise in multidimensional scaling for graph drawings. Additionally, Mirza sheds light on the responsibility of a public broadcaster in providing diversified content recommendations.
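
    One of the simple yet potent collaborative filtering algorithms linked in this episode's papers, Steck's EASE ("Embarrassingly Shallow Autoencoders"), even has a closed-form solution. A minimal NumPy sketch, where the regularization strength and the toy interaction matrix are invented for illustration:

    ```python
    import numpy as np

    def ease_weights(X, lam=10.0):
        """Closed-form item-item weights per Steck (2019):
        minimize ||X - X B||^2 + lam * ||B||^2 s.t. diag(B) = 0,
        solved via the inverse Gram matrix."""
        G = X.T @ X + lam * np.eye(X.shape[1])
        P = np.linalg.inv(G)
        B = -P / np.diag(P)        # divide each column j by P[j, j]
        np.fill_diagonal(B, 0.0)
        return B

    # Toy implicit-feedback matrix: 4 users x 3 items.
    X = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]], dtype=float)

    B = ease_weights(X)
    scores = X @ B                 # reconstructed user-item preferences
    ```

    No gradient descent, no embeddings: one matrix inversion over the item-item Gram matrix. That simplicity is part of why such shallow baselines remain attractive for a catalog of moderate size like a podcast library.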

    Towards the end of the episode, Mirza shares personal insights on his side project of becoming a novelist. Tune in for an informative and engaging conversation.

    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

    • (00:00) - Episode Overview
    • (01:43) - Introduction Mirza Klimenta
    • (08:06) - About ARD Audiothek
    • (21:16) - Recommenders for the ARD Audiothek
    • (30:03) - User Engagement and Feedback Signals
    • (46:05) - Optimization beyond Accuracy
    • (51:39) - Next RecSys Steps for the Audiothek
    • (57:16) - Underserved User Groups
    • (01:04:16) - Cold-Start Mitigation
    • (01:05:06) - Diversity in Recommendations
    • (01:07:50) - Further Challenges in RecSys
    • (01:10:03) - Being a Novelist
    • (01:16:07) - Closing Remarks

    Links from the Episode:
    • Mirza Klimenta on LinkedIn
    • ARD Audiothek
    • pub. Public Value Technologies
    • Implicit: Fast Collaborative Filtering for Implicit Feedback Datasets
    • Fairness in Recommender Systems: How to Reduce the Popularity Bias

    Papers:

    • Steck (2019): Embarrassingly Shallow Autoencoders for Sparse Data
    • Hu et al. (2008): Collaborative Filtering for Implicit Feedback Datasets
    • Cer et al. (2018): Universal Sentence Encoder

    General Links:

    • Follow me on Twitter: https://twitter.com/MarcelKurovski
    • Send me your comments, questions and suggestions to marcel@recsperts.com
    • Podcast Website: https://www.recsperts.com/
    1 hour 19 minutes