
Understanding Public Participation and Human Rights Gap in AI in East Africa

The ongoing Digital Rights and Inclusion Forum (DRIF24) in Accra, Ghana, is generating insightful conversations around this year’s theme: “Fostering Rights and Inclusion in the Digital Age.”

One of the sessions that stood out was Understanding the Public Participation and Human Rights Gap in AI in East Africa. The topic reflects KICTANet’s work around AI, such as “AI biases: what they are and how to mitigate them”, among others.

The panel discussions highlighted the challenges of inclusive public participation in AI-related matters.

The following were the key takeaways:

  • The debate about AI recognition and regulation in East Africa is ongoing, and several organisations have picked up the issue and taken the discussion to the general public. These include researchers like Babalanda Edmond from Uganda, who has analysed AI policies in seven East African countries and identified gaps in human rights recognition.
  • Most AI regulation in East Africa relies on existing laws and policies rather than newly formulated frameworks. Countries such as Mauritius, Uganda, Kenya, Tanzania, and Zambia are working to align their AI policies with UNESCO and African Commission recommendations to ensure ethical use and human rights protection.
  • Some countries lack existing laws on AI but have proposed regulations and policies, while others only have strategies. For example, Uganda has a national Fourth Industrial Revolution (4IR) strategy and a roadmap for AI policy development.

There was a collective call for countries to align their AI policies with UNESCO and African Commission recommendations to ensure ethical use. Doing so is more likely to ensure human rights protection and transparency.

The speakers also emphasised the importance of public participation in AI governance, citing examples from Mozilla’s research on AI. They warned against reinventing the wheel on public participation, noting the risk of repeating the mistakes made with data protection laws.

Instead, it was proposed that a human rights-based approach to AI governance be adopted. This approach involves:

  • Both risk assessment and values-based considerations.
  • Reliance on a comprehensive governance framework that addresses both the technical and ethical aspects of AI development and deployment.
  • Advocacy for inclusive AI strategies to ensure there is enough training data on marginalised groups.

Concerns were raised about government-led public discussions on AI, since such engagements are often constrained by limited time frames, low participation, under-resourcing, and political influence.

AI also risks being manipulated to reject local dialects and languages. This risk can be mitigated by opening up innovation and creating a sandbox environment for growth.

Other inputs included:

  • Gaps in understanding the definition of AI and its implications for public participation.
  • Language barriers in public participation and the importance of inclusivity.
  • Capacity building and advocacy for cross-regional learning to address AI governance challenges.
  • The role of journalists in holding tech companies accountable for their actions in the Global South.

It is essential to bridge the gap in understanding the definition of AI and its implications for public participation. There is also a need to address language barriers in public participation and advocate for inclusivity. By aligning AI policies with ethical and human rights considerations, we can ensure that AI is used for the benefit of all.

Follow this link to access KICTANet’s reports, policy briefs, and submissions on artificial intelligence, digital rights, data privacy, and cybersecurity.

Nicodemus Nyakundi is the Digital Accessibility Program Officer at KICTANet. He has a background in Information Technology and is passionate about digital inclusivity.


 
