AI is increasingly being used in research. Photo: Shutterstock Images

AI biases: what they are and how to mitigate them

By John Walubengo

As artificial intelligence (AI) increasingly infiltrates diverse sectors such as healthcare, the judiciary, agriculture, media and employment, it is essential to recognise and address AI bias: the phenomenon in which AI systems generate skewed outcomes that favour one group over others.

It is not only a technical challenge but also a profoundly ethical issue.

AI bias manifests in several forms, each capable of causing significant unfairness in society. The three fundamental forms are data bias, algorithmic bias, and confirmation bias, as expounded below:

Types of Biases

Data Bias

This arises when the data sets used to train AI models are not representative enough. For example, an image-recognition system may be unable to identify black people because it was trained mainly on pictures of white people. This could result in a higher rate of false positives for black passengers going through automated airport security checks than for white passengers.

Another common example is that when an AI system is asked to generate an image of a surgeon, it is more likely to produce a picture of a male surgeon than a female one. This stereotyping occurs when the system has been trained on more examples of male surgeons than female surgeons.

Algorithmic Bias

This arises when the training data sets reflect real-world injustices. A common example is an AI-driven recruitment tool that favours male candidates over female candidates for top positions simply because that is the reality of today's world.

The AI system learns from historical hiring data that is inherently male-biased, and it perpetuates this bias in its selection process. Another frequently cited example is that of judges in the US who used an AI tool to determine which prisoners qualified for parole.

More often than not, the tool favoured white prisoners over black prisoners for parole simply because it learned from existing parole data that was inherently biased against black minorities.

Confirmation Bias

This occurs during the pre-processing stages of AI development. For example, in the financial sector, AI systems are often used to assess the creditworthiness of loan applicants. 

Confirmation bias can seep into the system if the data or the parameters set by the developers reflect subconscious beliefs about the financial reliability of certain demographic groups. For example, if a developer believes that younger people are less reliable in repaying loans, they might unconsciously design the AI to weigh age more heavily than other factors. This could lead the AI to reject loan applications from younger individuals more frequently, reinforcing the developer’s original bias.
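To make the mechanics concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical: the weights, the approval threshold, and the two applicants are invented purely to show how a single overweighted factor, in this case age, can override otherwise strong signals such as income and repayment history.

```python
# Hypothetical credit-scoring rule: every weight, threshold, and applicant
# below is invented for illustration, not taken from any real system.

def credit_score(age: int, income: float, repayment_history: float) -> float:
    """Score an applicant; higher is better. repayment_history is in [0, 1]."""
    age_weight = 3.0      # overweighted: encodes the developer's belief about young borrowers
    income_weight = 0.5   # applied to income in thousands
    history_weight = 1.0  # applied to repayment record as a percentage
    return (age_weight * age
            + income_weight * (income / 1000)
            + history_weight * (repayment_history * 100))

APPROVAL_THRESHOLD = 200.0  # invented cut-off

applicants = [
    {"name": "younger applicant", "age": 22, "income": 60000, "history": 0.95},
    {"name": "older applicant", "age": 55, "income": 30000, "history": 0.60},
]

for a in applicants:
    score = credit_score(a["age"], a["income"], a["history"])
    decision = "approve" if score >= APPROVAL_THRESHOLD else "reject"
    print(f"{a['name']} (age {a['age']}): score={score:.0f} -> {decision}")
```

Run as written, the younger applicant is rejected on a score of 191 despite the stronger income and repayment record, while the older applicant is approved on 240. The bias lives in the hand-picked age weight rather than in any line that looks openly discriminatory, which is precisely why it is hard to spot.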

The repercussions of AI bias are significant and multifaceted. In law enforcement, it can lead to unjust profiling; in hiring, it can deny candidates fair employment opportunities; in healthcare, it might skew diagnoses and treatments. Such biases undermine the credibility and utility of AI, potentially causing societal mistrust and systemic inequities.

Strategies for Mitigation

Mitigating AI bias involves several proactive steps, some of which are discussed below:

Ensuring Diverse Data Sets 

To combat data bias, AI models must be trained on representative data. For instance, facial recognition technologies must be developed using diverse datasets encompassing various ethnic backgrounds, ages, and genders to prevent discriminatory outcomes.
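One practical first step, sketched below in Python, is to measure the data set's composition before training. The sketch assumes each training example carries a demographic label; the group names, counts, and target shares are all invented for illustration.

```python
# Hypothetical pre-training representation check; all figures are invented.
from collections import Counter

# Shares each group should hold in the population the system will serve.
target_shares = {"group_a": 0.25, "group_b": 0.25, "group_c": 0.25, "group_d": 0.25}

# Demographic labels attached to the training examples.
labels = ["group_a"] * 700 + ["group_b"] * 150 + ["group_c"] * 100 + ["group_d"] * 50

counts = Counter(labels)
total = len(labels)
for group, target in target_shares.items():
    actual = counts[group] / total
    flag = "  <-- under-represented: collect more examples" if actual < 0.5 * target else ""
    print(f"{group}: {actual:.1%} of data (target {target:.0%}){flag}")
```

A check like this will not catch every form of data bias, but it makes gross under-representation visible before a model is ever trained on it.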

Conducting Regular Audits

Regular technical AI audits can identify and mitigate algorithmic biases. For example, ongoing reviews of AI-driven hiring tools can help ensure these systems do not perpetuate historical biases, with their algorithms adjusted where necessary to ensure fairness.
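As one example of what such an audit might check, the sketch below applies the well-known "four-fifths" rule of thumb to a hiring tool's logged decisions: if one group's selection rate falls below 80 per cent of another's, the tool is flagged for review. The decision log here is invented.

```python
# Hypothetical audit check: compare selection rates across groups
# in a hiring tool's decision log (all records below are invented).

decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]

def selection_rate(group: str) -> float:
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rates = {group: selection_rate(group) for group in ("female", "male")}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print(f"impact ratio {ratio:.2f} is below 0.8 -> flag the tool for review")
```

On this toy log the female selection rate is 25 per cent against 50 per cent for men, an impact ratio of 0.5, so the audit would flag the tool well before such a disparity hardens into practice.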

Maintaining Algorithm Transparency

Transparency in how algorithms operate can help identify biases. In healthcare, where algorithms determine patient treatment plans, understanding the decision-making process can ensure that it does not inadvertently disadvantage certain groups.
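For models simple enough to expose their weights, transparency can begin with publishing them. The sketch below is hypothetical: the feature names and weights are invented, but it shows how an unusually dominant feature, such as a postal code that proxies for ethnicity or income, stands out once the weights are laid side by side.

```python
# Hypothetical weights from a linear model used to prioritise treatment.
# Feature names and values are invented for illustration.
model_weights = {
    "blood_pressure": 0.8,
    "cholesterol": 0.6,
    "postal_code": 1.9,  # suspiciously dominant: location often proxies for ethnicity or income
    "age": 0.3,
}

# List features by influence so reviewers can spot likely proxy variables.
for feature, weight in sorted(model_weights.items(), key=lambda kv: -abs(kv[1])):
    note = "  <-- dominant: check for proxy effects" if abs(weight) > 1.0 else ""
    print(f"{feature}: {weight:+.2f}{note}")
```

Many modern systems are not this inspectable, but the principle carries over: whatever the model, reviewers need some account of which inputs drive its decisions.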

Adhering to Ethical Guidelines

Ethical guidelines can steer the equitable development of AI. For instance, guidelines prioritising fairness helped modify a biased predictive policing tool so that it discounted data shaped by observable patrol patterns rather than actual crime rates.

Addressing the challenge of AI bias requires a multifaceted approach, combining technical adjustments with ethical considerations. By integrating these strategies into AI development and deployment, we can mitigate bias and work towards more equitable outcomes. 

This commitment to fairness in AI is not just about improving technology but about fostering a society that values and protects the dignity and rights of all its members. It’s critical to have AI policies that promote a future where technology serves humanity without discrimination.

John Walubengo is an ICT Lecturer and Consultant. @jwalu.
