This study explores how the use of artificial intelligence (AI) in dating apps can reinforce or mitigate biases against marginalized stakeholder groups, including racial minorities, women, and LGBTQ+ users. While AI improves user engagement and safety, it also risks reinforcing systemic biases, such as racial and gender-based discrimination. Through a qualitative thematic analysis of publicly available data, this research examines AI implementation in popular dating apps such as Tinder, Bumble, and Grindr. Findings show that AI-driven algorithms often prioritize engagement over fairness, disproportionately excluding marginalized groups. The study highlights AI's dual role: it can mitigate harm through features like harassment detection, but it also perpetuates inequalities when designed without ethical safeguards. Theoretical contributions include applying stakeholder theory and ethical AI frameworks to digital matchmaking, emphasizing companies' ethical responsibilities to marginalized users. Practical recommendations focus on algorithmic transparency, fairness-aware machine learning, and inclusive AI governance. This research underscores the need for ethical AI practices in dating apps to ensure inclusivity and fairness, contributing to broader discussions on AI ethics and digital discrimination.
| Date of Award | 23 Apr 2025 |
|---|---|
| Original language | English |
| Awarding Institution | Universidade Católica Portuguesa |
| Supervisor | Rosa Fioravante (Supervisor) |
- AI bias
- Marginalized stakeholders
- Dating apps
- Algorithmic discrimination
- Ethical AI
- Stakeholder theory
AI biases and marginalized stakeholders: inquiry into AI implementation practices in dating apps
Lesnikova, E. (Student). 23 Apr 2025
Student thesis: Master's Thesis