Vedant Hathalia shares his experiences with AI and what he thinks can be improved.
Growing up, I thought AI was this neutral, objective force. Pure math and logic. Then I started seeing it in practice. Insurance eligibility decisions made by algorithms. Hiring systems that screen candidates before human eyes ever see them. Policing tools that predict where crime will happen next. Efficient, right? Objective even.
But I’ve seen firsthand how these systems can go wrong. AI learns from historical patterns. When those patterns reflect gaps or biases, algorithms amplify them. In healthcare, if certain communities weren’t well-represented in medical studies or claims data, algorithms trained on that data will make worse decisions for those groups. In hiring, if past applicants came from limited backgrounds, AI perpetuates that homogeneity. In policing, predictive systems send officers to neighborhoods based on past arrest records, creating cycles that are hard to break.
What worries me most is how we’re moving backward on solutions. The root problem in these systems is the data: datasets in which the voices of marginalized communities aren’t well represented. The broad rollback of DEI policies has eliminated funding that specifically went toward gathering better, more inclusive data, especially in healthcare. We’re making the problem worse just when we need to be doing better.
As a South Asian American, I’m proud to celebrate my heritage. I’m also aware that representation matters in ways we don’t always see. When the data doesn’t include us, the systems built on that data don’t work for us. When our experiences aren’t captured, our needs get overlooked. The solution exists: gather better, more inclusive data, and fund the work it takes to do so.
