Making Recommendations Fairer: A New Way to Guarantee Exposure for All


The Problem: Exposure Bias in Recommendations
As recommender systems become more widespread across digital platforms, concerns around fairness are coming to the forefront. Standard relevance-based ranking techniques, while effective at optimizing user satisfaction, often reinforce existing biases and contribute to the systematic underexposure of certain item groups. For example, on music or book platforms, content from independent or minority creators may be consistently ranked lower due to historical popularity patterns, even when it's just as relevant to the user's interests. This phenomenon is known as exposure bias, and it has the potential to perpetuate visibility gaps, reduce content diversity, and further marginalize creators who are already underrepresented in the ecosystem.
Exposure bias in recommender systems arises due to position bias: users are more likely to interact with items that appear at the top of recommendation lists. Even if an item is included in a recommendation list, its rank position strongly influences whether it receives user attention. Existing fairness-aware ranking methods have made progress in increasing group representation. However, many fail to address exposure explicitly, or do so without providing guarantees on its distribution across groups. Some methods also rely on large optimization models, which makes them too resource-heavy for real-world systems with large item catalogs.
Our Approach: A Scalable Fairness Framework
Our solution to this problem is a post-processing framework based on Integer Linear Programming (ILP) that re-ranks recommendation lists to satisfy fairness constraints. The framework is centered on two types of exposure constraints:
- Minimum Exposure: Ensures that each group receives at least a fixed proportion of the total exposure.
- Relative Exposure: Guarantees that the exposure of a protected group is at least a specified multiple of the exposure of a non-protected group.

In both constraints, exposure is quantified using a logarithmic discount factor to account for position bias, consistent with established metrics like nDCG.
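To make these constraints concrete, here is a minimal Python sketch of the two checks for a single ranked list. The function names and the per-list (rather than catalog-wide) scope are our own simplifications for illustration; the 1/log2(k+1) discount follows the standard DCG convention.

```python
import math

def position_weight(k: int) -> float:
    """Logarithmic position discount, as in DCG: position 1 gets weight 1.0."""
    return 1.0 / math.log2(k + 1)

def group_exposures(ranking, item_group):
    """Total discounted exposure each group receives in one ranked list."""
    exposure = {}
    for k, item in enumerate(ranking, start=1):
        g = item_group[item]
        exposure[g] = exposure.get(g, 0.0) + position_weight(k)
    return exposure

def satisfies_min_exposure(exposure, group, alpha):
    """Minimum exposure: the group's share of total exposure is at least alpha."""
    total = sum(exposure.values())
    return exposure.get(group, 0.0) >= alpha * total

def satisfies_relative_exposure(exposure, protected, unprotected, beta):
    """Relative exposure: protected exposure >= beta times unprotected exposure."""
    return exposure.get(protected, 0.0) >= beta * exposure.get(unprotected, 0.0)
```

Under this discount, position 1 receives weight 1.0, position 2 about 0.63, and position 3 exactly 0.5, so the same set of items can satisfy or violate a constraint depending purely on how they are ordered.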
Notably, our ILP formulation involves at most K×|I| binary variables (where K is the number of recommendation positions and |I| the number of items), making it significantly more scalable than existing quadratic-size models. The technical details can get fairly involved; for those interested, we recommend reading our paper, and we welcome direct contact for further discussion.
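Intuitively, the ILP assigns one binary variable per item-position pair (item i placed at slot k) and maximizes discounted relevance subject to assignment and exposure constraints. The sketch below brute-forces that same optimization on a tiny instance purely for illustration; a real deployment would solve the ILP with an off-the-shelf solver, and the example data here is hypothetical.

```python
import math
from itertools import permutations

def rerank_with_min_exposure(relevance, item_group, protected, alpha, K):
    """
    Brute-force stand-in for the ILP: among all length-K rankings, pick the
    one maximizing discounted relevance, subject to the protected group
    receiving at least an alpha share of the total exposure. The actual
    framework encodes this with K*|I| binary variables x[i][k]; exhaustive
    search is only feasible for toy instances like this one.
    """
    weight = [1.0 / math.log2(k + 2) for k in range(K)]  # index 0 -> position 1
    total = sum(weight)
    best, best_score = None, float("-inf")
    for ranking in permutations(relevance, K):
        prot = sum(w for item, w in zip(ranking, weight)
                   if item_group[item] == protected)
        if prot < alpha * total:
            continue  # violates the minimum-exposure constraint
        score = sum(relevance[item] * w for item, w in zip(ranking, weight))
        if score > best_score:
            best, best_score = list(ranking), score
    return best
```

For example, with two "major" and two "indie" items and a 40 percent minimum-exposure floor for the indie group, the constrained optimum promotes an indie item into the top slot even though the two major items have higher relevance scores.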
What We Learned: Results and Insights
Our key empirical findings from offline experiments are:
- The framework significantly increased the exposure of disadvantaged items, especially in long-tail scenarios.
- As expected, there was some trade-off in accuracy, but the drop was modest and can be controlled with a tunable parameter.
- More active users with longer interaction histories often maintained the same level of accuracy, likely because they already engage with long-tail items.
Why It Matters
In many real-world scenarios (like music platforms, online marketplaces, or news apps), fair exposure isn't just nice to have; it's essential. Sometimes it's even part of business rules or regulatory compliance (e.g., guaranteeing visibility to all content providers). Our research provides a flexible, practical solution that balances fairness and accuracy, and can be deployed without redesigning the entire system. Moreover, promoting fair exposure can have long-term benefits for user engagement by encouraging exploration and increasing satisfaction across a broader range of content. Over time, this can also help diversify consumption patterns and expand the reach of underrepresented creators or products.
Use-Case Examples

For instance, consider a music streaming service where millions of tracks, from blockbuster hits to indie releases, compete for listener attention. In the original setup, 90 percent of plays cluster around the top 1 percent of songs. By slipping our re‑ranking module into the pipeline, treating “independent artist” as the protected group, and tuning the relative‑exposure weight, the service could boost the number of long‑tail tracks surfacing in users’ top‑20 recommendations.
In the e‑commerce domain, an online marketplace could aim to give eco‑friendly and women‑owned brands a fighting chance against mass‑market names. By running our post‑processing model and enforcing a relative‑exposure boost for certified producers, the platform can nudge its carousel of items toward a greener balance. The expected result? Certified‑green products see their click‑through rate climb, while shoppers’ average basket size remains steady.
Social feeds can also benefit from fairer exposure. For example, a photo-sharing app can use our framework to ensure that emerging creators (i.e., those with few followers) make up at least 40 percent of the top positions in each personalized feed. By boosting this group, the app can significantly increase new-creator engagement.
Conclusion
In summary, recommender systems go well beyond a single model. They are complex ecosystems involving not only prediction accuracy but also fairness, diversity, user experience, and alignment with broader organizational goals. Our work focuses on the often-overlooked post-processing stage, demonstrating that fairness can be effectively introduced without compromising system architecture or scalability. As the field moves toward more responsible and human-centered recommendation, incorporating exposure guarantees is a concrete step toward making these systems more inclusive, accountable, and aligned with real-world constraints. At Recombee, we align with this direction by incorporating principles of transparency, fairness, and practical deployment into our research and development efforts, with the goal of building recommender systems that are both effective and socially responsible.
References
Lopes, Ramon, Rodrigo Alves, Antoine Ledent, Rodrygo Santos, and Marius Kloft. "Recommendations with minimum exposure guarantees: A post-processing framework." Expert Systems with Applications 236 (2024): 121164.
