The cost of exclusion in AI
AI is shaping the future. Who gets to shape it?
When only a narrow group of people builds the AI systems shaping our world, the result isn’t just imbalance; it’s injustice.
This isn’t something that might happen someday. It’s happening right now, in hiring, healthcare, housing, education, criminal justice, and even in how we express ourselves online. When systems decide who gets a job, a loan, or medical attention, and who gets placed under police surveillance, it matters who built that system, whose data trained it, and who was never included in the first place.
Most of today’s AI systems are shaped by a small group of people, primarily privileged men. And when other perspectives are excluded, the result isn’t just an oversight; it’s built-in bias. Bias doesn’t require malice. It only takes absence, and a system that never questioned who wasn’t in the room.
Real-world impacts
Hiring Bias
Algorithms used in hiring have been found to filter out résumés from women and from people with ethnic-sounding names. When the training data reflects past discrimination, the system keeps that inequality going: a model trained to imitate yesterday’s decisions learns yesterday’s prejudice as if it were a qualification.
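To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is a hypothetical assumption: synthetic candidates, an invented group penalty in the historical labels, and a generic logistic model rather than any real vendor’s system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: a job-relevant skill score.
# Feature 1: group membership, or any proxy for it a model can pick up
# (a name, a zip code, a college).
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Historical labels: past recruiters hired on skill but penalized group 1.
hired = (skill - 1.5 * group + rng.normal(0.0, 0.5, n)) > 0

# Train a model to imitate those historical decisions.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill who differ only in group membership.
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The group-1 candidate gets a much lower hire probability: the model has
# faithfully learned the discrimination baked into its training data.
```

Nothing in the training step is malicious; the model simply minimizes its error against biased labels, and the bias comes along for free.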
Discrimination in Healthcare
A 2019 study found that a widely used healthcare algorithm assigned lower risk scores to Black patients than to equally sick white patients, leading to less care and fewer referrals. The mechanism was a proxy: the algorithm predicted healthcare costs as a stand-in for healthcare needs, and because less money had historically been spent on Black patients, it scored them as healthier than they were. When AI is used in insurance, treatment planning, or risk analysis, that kind of proxy can reinforce systemic racism in medicine.
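The proxy problem can be reproduced with a few lines of synthetic data. This sketch is not the study’s data or model; the 30% population share, the 0.7 spending gap, and the 3% referral cutoff are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

illness = rng.gamma(2.0, 1.0, n)       # true medical need (arbitrary units)
is_black = rng.random(n) < 0.3         # assumed population share

# Assumed access gap: at the same illness level, less is spent on Black patients.
cost = illness * np.where(is_black, 0.7, 1.0) + rng.normal(0.0, 0.2, n)

# The "risk score" is predicted cost; the top 3% are referred to extra care.
referred = cost >= np.quantile(cost, 0.97)

# Compare referral rates among patients who are equally sick by true need.
sickest = illness > np.quantile(illness, 0.97)
for label, mask in [("Black", is_black), ("white", ~is_black)]:
    rate = referred[mask & sickest].mean()
    print(f"{label} patients in the sickest 3% referred: {rate:.0%}")
# Equally sick Black patients are referred far less often, because the
# proxy (cost) encodes unequal access, not unequal need.
```

The score can be an accurate cost predictor and still an unjust needs predictor; accuracy on the proxy is exactly what produces the harm.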
Predictive Policing and Economic Harm
Predictive policing algorithms use past crime data to forecast where crime may occur. But if law enforcement historically over-policed Black and Brown neighborhoods, the algorithm learns to flag those places as perpetual “high-risk” zones. The result is a feedback loop: more patrols produce more recorded incidents, which justify more patrols, driving up arrests for minor offenses in the very neighborhoods the data already over-represented.
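A toy simulation shows how that loop locks in a historical disparity. Every number here is hypothetical: two neighborhoods with identical true crime rates, one of which starts with more recorded incidents because it was over-policed in the past.

```python
# Two neighborhoods, A and B, with identical underlying crime rates.
true_rate = [1.0, 1.0]
recorded = [50.0, 100.0]   # B starts with more incidents on the books
total_patrols = 100

for year in range(5):
    # Patrols are allocated in proportion to recorded incidents...
    share_b = recorded[1] / sum(recorded)
    patrols = [total_patrols * (1 - share_b), total_patrols * share_b]
    # ...and more patrols mean more incidents get *recorded*, not more crime.
    recorded = [r + p * t for r, p, t in zip(recorded, patrols, true_rate)]
    print(f"year {year}: neighborhood B receives {share_b:.0%} of patrols")
# B's share of patrols stays frozen at 67% forever, even though the two
# neighborhoods are, by construction, equally crime-prone. The historical
# disparity never corrects itself; the data keeps "confirming" it.
```

The model is never wrong about its own inputs. It is wrong about the world, because its inputs measure where police looked, not where crime happened.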
