Aims and Scope
Search and recommendation are steadily converging as research areas. Although they require fundamentally different inputs, i.e., the user provides an explicit query in search, while implicit and explicit feedback is leveraged in recommendation, existing search algorithms are being personalized based on users' profiles, and recommender systems are being optimized for ranking quality.
Both classes of algorithms aim to learn patterns from historical data, and such data often conveys biases in the form of imbalances and inequalities. These hidden biases are unfortunately captured in the learned patterns, and often amplified in the results these algorithms return to users. When a bias affects a sensitive attribute of a user, such as their gender or religion, the inequalities reinforced by search and recommendation algorithms can even lead to severe societal consequences, such as discrimination against users.
For this critical reason, being able to detect, measure, characterize, and mitigate these biases while preserving high effectiveness is a prominent and timely topic for the IR community. Mitigating the effects of popularity bias, ensuring results that are fair with respect to users, and being able to interpret why a model returns a given recommendation or search result are examples of challenges that matter in real-world applications. This workshop aims to collect new contributions in this emerging field and to provide a common ground for interested researchers and practitioners.
Special Issue
We have arranged a Special Issue, devoted to the workshop topics, in the journal "Information Processing & Management" (Elsevier; Impact Factor: 4.787). We solicit several types of contributions (research papers, surveys, replicability and reproducibility studies, resource papers, and systematic review articles) on bias and fairness in search and recommendation.
Details: Call for Papers of the Special Issue on the journal website.
Proceedings
On April 14, 2020, the International Workshop on Algorithmic Bias in Search and Recommendation was held online, in conjunction with the 42nd European Conference on Information Retrieval (ECIR 2020). The workshop had more than 70 participants. The keynote speaker was Prof. Chirag Shah from the University of Washington (United States). The scientific program included demo and paper presentations. The papers covered topics ranging from search and recommendation in online dating, education, and social media, through the impact of gender bias in word embeddings, to tools that allow users to explore bias and fairness on the Web. The event concluded with a discussion session aimed at highlighting open issues and research challenges.
Download: Full Report published in the SIGIR Forum
Download: Workshop Proceedings published by Springer CCIS
Keynote
Prof. Chirag Shah, University of Washington (USA)
Title: Investigating Bias and Instigating Fairness in Search and Recommendation.
Abstract: Bias is omnipresent -- from data to algorithms, and from the framing of a problem to the interpretation of its solution. In this talk, I will highlight how such bias, in machine learning techniques in general and in search and recommender systems in particular, causes material problems for users, businesses, and society at large. The examples span the areas of search, education, and health. I will then introduce the idea of a marketplace as a way to find balance or fairness in the system and to address the issue of bias, among other things. I will draw specific examples from our work on search and recommendation systems to demonstrate that achieving fairness in a marketplace and addressing bias in data and algorithms are not just morally and ethically the right things to do, but could also lead to more sustainable growth for various industries, governments, and our scientific advancement.
About the speaker: Chirag Shah is an Associate Professor in the Information School (iSchool) at the University of Washington (UW) in Seattle. Before UW, he was a faculty member at Rutgers University. His research interests include studies of interactive information retrieval/seeking, aiming to understand the task a person is performing and to provide proactive recommendations. In addition to creating task-based IR systems that provide more personalized reactive and proactive recommendations, he also focuses on making such systems transparent, fair, and free of biases. Dr. Shah received his MS in Computer Science from the University of Massachusetts (UMass) Amherst, and his PhD in Information Science from the University of North Carolina (UNC) at Chapel Hill. His research is supported by grants from the National Science Foundation (NSF), the National Institutes of Health (NIH), the Institute of Museum and Library Services (IMLS), Amazon, Google, and Yahoo. He spent his sabbatical in 2018 at Spotify working on voice-based search and recommendation problems. In 2019, as an Amazon Scholar, he worked with Amazon’s Personalization team on applications involving personalized and task-oriented recommendations. He is the recipient of the Microsoft BCS/BCS IRSG Karen Spärck Jones Award 2019. More information about Dr. Shah can be found at http://chiragshah.org/.
Final Program
The ongoing worldwide COVID-19 situation led the ECIR 2020 organizers to turn both the conference and the workshops into an open online event held via Zoom.
As chairs of the Bias2020@ECIR Workshop - International Workshop on Algorithmic Bias in Search and Recommendation, we are pleased to invite you to the online version of the workshop. The virtual room will open at https://us04web.zoom.us/j/464990256 on April 14, 2020, at 08:45 (Lisbon local time) for anyone who wishes to attend the workshop (talks, demos, keynote, and so on) and participate in the open discussion on this important research path. Please contact the Workshop Chairs by email to obtain access.
All workshop-day timings refer to April 14, 2020, Lisbon local time.
Timing | Content |
---|---|
09:00 - 09:20 | Welcome Message and Connection Setup |
09:20 - 10:50 | Bias Session #1 |
09:20 - 09:45 | Bias Goggles - Exploring the bias of Web Domains through the Eyes of the Users G. Konstantakis, I. Promponas, M. Dretakis, P. Papadakos |
09:45 - 10:05 | Facets of Fairness in Search and Recommendation S. Verma, R. Gao, C. Shah |
10:05 - 10:30 | Mitigating Gender Bias in Machine Learning Data Sets S. Leavy, G. Meaney, K. Wade, D. Greene |
10:30 - 10:50 | Why do we need to be bots? What prevents society from detecting biases in recommendation systems T. D. Krafft, M. P. Hauer, K. A. Zweig |
10:50 - 11:20 | Coffee Break |
11:20 - 12:50 | Bias Session #2 |
11:20 - 11:45 | Matchmaking Under Fairness Constraints: a Speed Dating Case Study D. Paraschakis, B. J. Nilsson |
11:45 - 12:05 | Effect of Debiasing on Information Retrieval E. Gerritse, A. de Vries |
12:05 - 12:30 | Using String-Comparison measures to Improve and Evaluate Collaborative Filtering Recommender Systems L. M. Lustosa Pascoal, H. A. Dantas Do Nascimento, T. Couto Rosa, E. Queiroz da Silva, E. Lima Aleixo |
12:30 - 12:50 | Recommendation filtering à la carte for intelligent tutoring systems W. Perreira, M. Spalenza, J.-R. Bourguet, E. de Oliviera |
12:50 - 14:00 | Lunch Break |
14:00 - 15:00 | Keynote (Main Conference): Focusing the macroscope: how we can use data to understand behavior J. Gonçalves de Sá |
15:00 - 15:15 | Coffee Break |
15:15 - 16:05 | Keynote (BIAS Workshop): Investigating Bias and Instigating Fairness in Search and Recommendation C. Shah |
16:05 - 16:50 | Social Aspects Session #1 |
- Analyzing the Interaction of Users with News Articles to Create Personalization Services A. Celi, R. Eramo, A. Piad, J. Diaz Blanco
- A Novel Similarity Measure for Group Recommender Systems with Optimal Time Complexity G. Ramos, C. Caleiro
- Improving News Personalization through Search Logs X. Bai, B. B. Cambazoglu, F. Gullo, A. Mantrach, F. Silvestri
- What kind of content are you prone to tweet? Multi-topic Preference Model for Tweeters L. Recalde, R. Baeza-Yates
- Beyond Accuracy in Link Prediction J. Sanz-Cruzado, P. Castells
- Venue Suggestion Using Social-Centric Scores M. Aliannejadi, F. Crestani
- Data Pipelines for Personalized Exploration of Rated Datasets S. Amer-Yahia, A. Tho Le, E. Simon
- The Impact of Foursquare Checkins on Users’ Emotions on Twitter S. A. Mirlohi Falavarjani, H. Hosseini, E. Bagheri
- Enriching Product Catalogs with User Opinions T. de Melo, A. da Silva, E. de Moura, P. Calado
16:50 - 17:20 | Open Discussion |
17:20 - 17:30 | Concluding Remarks |
Important Dates
- Submissions: February 3, 2020 (extended from January 27, 2020)
- Notifications: March 11, 2020 (extended from February 27, 2020)
- Camera-Ready: April 12, 2020 (extended from March 30, 2020)
- Workshop: April 14, 2020 - ONLINE
All deadlines are 11:59pm, AoE time (Anywhere on Earth).
Topics
We solicit contributions on topics related to algorithmic bias in search and recommendation, focused on (but not limited to):
- Data Set Collection and Preparation:
- Managing imbalances and inequalities within data sets
- Devising collection pipelines that lead to fair and unbiased data sets
- Collecting data sets useful for studying potentially biased and unfair situations
- Designing procedures for creating synthetic data sets for research on bias and fairness
- Countermeasure Design and Development:
- Conducting exploratory analyses that uncover biases
- Designing treatments that mitigate biases (e.g., popularity bias mitigation)
- Devising interpretable search and recommendation models
- Providing treatment procedures whose outcomes are easily interpretable
- Balancing inequalities among different groups of users or stakeholders
- Evaluation Protocol and Metric Formulation:
- Conducting quantitative experimental studies on bias and unfairness
- Defining objective metrics that consider fairness and/or bias
- Formulating bias-aware protocols to evaluate existing algorithms
- Evaluating existing strategies in unexplored domains
- Case Study Exploration:
- E-commerce platforms
- Educational environments
- Entertainment websites
- Healthcare systems
- Social networks
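To make the metric-formulation topic above concrete, one widely studied quantity is the average popularity of recommended items, which signals how strongly recommendation lists skew toward popular content. The following is a minimal illustrative sketch (the function name and toy data are our own, not part of the workshop material):

```python
from collections import Counter

def average_recommendation_popularity(recommendations, interactions):
    """Mean popularity (interaction count) of recommended items,
    averaged per user and then across users. Higher values suggest
    stronger popularity bias in the recommendation lists."""
    popularity = Counter(interactions)  # item -> number of interactions
    per_user = [
        sum(popularity[item] for item in items) / len(items)
        for items in recommendations.values()
    ]
    return sum(per_user) / len(per_user)

# Toy interaction log and two users' top-2 recommendation lists.
interactions = ["a", "a", "a", "b", "b", "c"]
recs = {"u1": ["a", "b"], "u2": ["a", "c"]}
print(average_recommendation_popularity(recs, interactions))  # 2.25
```

Comparing this value against the average popularity of a random or catalog-wide baseline is one simple way to quantify the skew that mitigation treatments aim to reduce.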
Submission Details
All submissions must be written in English. When preparing their papers, authors should consult the ECIR paper guidelines and Fuhr’s guide to avoiding common IR evaluation mistakes. Authors should also consult Springer’s authors’ guidelines and use their proceedings templates, either LaTeX or Word. Papers should be submitted as PDF files via EasyChair at https://easychair.org/conferences/?conf=bias2020. Please be aware that at least one author per paper must register for and attend the workshop to present the work.
We will consider three different submission types:
- Full papers (14 pages) should be clearly positioned with respect to the state of the art and should state the contribution of the proposal in the domain of application, even if they present preliminary results. In particular, research papers should describe the methodology in detail, experiments should be repeatable, and a comparison with existing approaches in the literature should be made.
- Short papers (8 pages) should introduce new points of view on the workshop topics or summarize the experience of a researcher or a group in the field. Practice and experience reports should present in detail real-world scenarios in which search and recommender systems are exploited.
- Demo papers (4 pages) should present a prototype or an application that employs search and recommender systems within the workshop topics. The systems will be shown at the workshop.
Submissions should not exceed the indicated number of pages, including any diagrams and references.
We expect authors, PC, and the organizing committee to adhere to the ACM’s Conflict of Interest Policy and the ACM’s Code of Ethics and Professional Conduct.
Committees
Workshop Chairs
- Ludovico Boratto, Eurecat - Centre Tecnológic de Catalunya (Spain)
- Stefano Faralli, Unitelma Sapienza University of Rome (Italy)
- Mirko Marras, University of Cagliari (Italy)
- Giovanni Stilo, University of L’Aquila (Italy)
Program Committee
- Himan Abdollahpouri, University of Colorado Boulder (United States)
- Luca Aiello, Nokia Bell Labs (United Kingdom)
- Mehwish Alam, FIZ Karlsruhe - Karlsruhe Institute of Technology (Germany)
- Marcelo Armentano, National University of Central Buenos Aires (Argentina)
- Solon Barocas, Microsoft Research - Cornell University (United States)
- Alejandro Bellogin, Universidad Autónoma de Madrid (Spain)
- Asia Biega, Microsoft Research (United States)
- Glencora Borradaile, Oregon State University (United States)
- Federica Cena, University of Turin (Italy)
- Pasquale De Meo, University of Messina (Italy)
- Sarah Dean, University of California Berkeley (USA)
- Danilo Dessì, FIZ Karlsruhe - Karlsruhe Institute of Technology (Germany)
- Laura Dietz, University of New Hampshire (United States)
- Damiano Distante, Unitelma Sapienza University of Rome (Italy)
- Carlotta Domeniconi, George Mason University (United States)
- Michael Ekstrand, Boise State University (United States)
- Francesco Fabbri, Universitat Pompeu Fabra (Spain)
- Golnoosh Farnadi, Mila - University of Montreal (Canada)
- Nina Grgic-Hlaca, Max Planck Institute for Software Systems (Germany)
- Rossi Kamal, Kyung Hee University (South Korea)
- Toshihiro Kamishima, AIST (Japan)
- Karrie Karahalios, University of Illinois (United States)
- Aonghus Lawlor, University College Dublin (Ireland)
- Cataldo Musto, University of Bari Aldo Moro (Italy)
- Razieh Nabi, Johns Hopkins University (United States)
- Federico Nanni, The Alan Turing Institute (United Kingdom)
- Alexander Panchenko, Skolkovo Institute of Science and Technology (Russia)
- Panagiotis Papadakos, University of Crete (Greece)
- Emma Pierson, Stanford University (United States)
- Simone Paolo Ponzetto, Universität Mannheim (Germany)
- Alessandro Raganato, University of Helsinki (Finland)
- Babak Salimi, University of Washington (United States)
- Fabrizio Silvestri, Facebook (United Kingdom)
- Antonela Tommasel, National University of Central Buenos Aires (Argentina)
- Kyle Williams, Microsoft Research (United States)
- Eva Zangerle, University of Innsbruck (Austria)
- Markus Zanker, Free University of Bolzano-Bozen (Italy)
- Meike Zehlike, Max Planck Institute for Software Systems (Germany)
- Arkaitz Zubiaga, Queen Mary University of London (United Kingdom)
Contacts
For general enquiries on the workshop, please send an email to ludovico.boratto@acm.org, stefano.faralli@unitelmasapienza.it, mirko.marras@unica.it, and giovanni.stilo@univaq.it.