The Dark Side of AI: Bias and Fairness at Data Summit 2025


While the technological hype around AI seems endless and ripe with possibility, the excitement hides a darker side: the ethics and bias of AI, and the harmful outcomes that unfair systems produce.

Parul Gupta, a software engineer, led the Data Summit session “Realizing the Promise of Machine Learning,” examining the nuances of AI fairness and bias and the major role constraints will play in developing ethical AI systems.

The annual Data Summit conference returned to Boston, May 14-15, 2025, with pre-conference workshops on May 13.

Bias exists in many forms, from cognitive bias to social, cultural, and implicit. Each form must be taken into account when examining the applications of AI, though Gupta homed in on implicit bias. Unintentional, difficult to measure, and expressed in snap judgments, implicit bias “is so ingrained in our society, it becomes normal thinking.”

For instance, Gupta pointed to the fact that most tech CEOs are male: “This doesn’t surprise me. And it made me think, why doesn’t it surprise me?”

This is an example of implicit bias, the unconscious beliefs and sentiments held just outside our awareness. Importantly, implicit bias is something that AI can learn and perpetuate. The relationship between AI and bias is a socio-technical problem, which makes AI fairness crucial to ensuring that all social groups have an equitable experience with AI.

Gender and racial bias in AI systems have already landed many organizations in hot water, making AI fairness not only an ethical concern but a business one.

“We see a lot of biases amplified by AI models,” said Gupta. “A famous example is Amazon’s hiring model, which began to downgrade women’s applications.” This sort of bias can be introduced during any phase of the AI lifecycle, from measurement to preprocessing, model learning, model definition, and deployment.

“Doesn’t that scare you? Especially as it’s been made clear that AI is here to stay,” said Gupta. “[So,] how do we deal with bias?”

Gupta called for recognizing that AI systems are never neutral: they represent the values of the people involved in their development. And while we still live in an unjust world, and AI will inevitably reflect that unfairness, Gupta suggested that enterprises examine the outcomes and harms of bias in order to tackle the challenge effectively.

Group fairness, which adds fairness constraints to a model’s optimization problem, grounds the theory and mathematics of fairness in the context of bias. Gupta described the following constraints, along with their pros and cons (each can be measured in code, as sketched below):

  • Demographic Parity: Each group has an equal probability of a positive result. It yields equal proportions across represented groups, but carries a risk of gerrymandering within them.
  • Equalized Odds: Each group has an equal probability of a positive result, conditional on the true outcome (predictions are independent of group membership given the actual label). It provides some guarantee of quality of service, but any disparity between groups present in the training data will remain.
  • Equal Opportunity: Each group has an equal true positive rate, so that qualified candidates are equally likely to receive a positive result regardless of group.

“One constraint doesn’t fit everything,” said Gupta. “At the same time, all constraints do not fit.” 
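
To make these constraints concrete: the open source Fairlearn library, which Gupta pointed to later in the session, exposes each of them as a metric. Below is a minimal sketch, assuming Fairlearn and scikit-learn are installed; the model and dataset are synthetic stand-ins for illustration, not material from the session.

```python
# Minimal sketch: measuring the three constraints with Fairlearn.
# Assumes `pip install fairlearn scikit-learn`; the data below is
# synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    equalized_odds_difference,
    true_positive_rate,
)

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # hypothetical binary sensitive feature
# Outcome correlated with the sensitive feature, so bias can appear
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

y_pred = LogisticRegression().fit(X, y).predict(X)

# Demographic parity: gap in positive-prediction rates between groups
print(demographic_parity_difference(y, y_pred, sensitive_features=group))

# Equalized odds: largest gap in true/false positive rates between groups
print(equalized_odds_difference(y, y_pred, sensitive_features=group))

# Equal opportunity: compare true positive rates group by group
frame = MetricFrame(metrics=true_positive_rate, y_true=y, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)
```

A difference near zero means the corresponding constraint is approximately satisfied; as Gupta’s remark suggests, which one to optimize for depends on the harms at stake.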

Gupta emphasized that spreading awareness of AI bias is a crucial tool for increasing AI fairness. Outside of mathematical probabilities and technological constraints, discussing and examining AI ethics and fairness with others is something that can be done today. In the same vein, Gupta asserted that contributing to open source communities such as Fairlearn and AI for People further uplifts AI fairness, transparency, and explainability.
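
Gupta’s description of fairness as constraints added to an optimization problem maps directly onto Fairlearn’s reductions API, where a constraint object is handed to a meta-estimator that retrains the underlying model to satisfy it. Here is a minimal sketch under the same illustrative assumptions as above; EqualizedOdds or TruePositiveRateParity could be swapped in for DemographicParity.

```python
# Minimal sketch: enforcing demographic parity during training with
# Fairlearn's reductions API. Synthetic, illustrative data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # hypothetical binary sensitive feature
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The constraint is passed to a meta-estimator, which retrains the
# underlying model until the constraint is (approximately) satisfied.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=group)

# The parity gap should shrink relative to the unconstrained model
print(demographic_parity_difference(y, mitigator.predict(X),
                                    sensitive_features=group))
```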

Many Data Summit 2025 presentations are available for review at https://www.dbta.com/datasummit/2025/presentations.aspx

