[Image: An artist’s illustration of artificial intelligence (AI), created by Nidia Dias.]

KICTANet Initiates Dialogue on AI Policy and Regulation in Kenya

By John Walubengo

Last week, KICTANet held a very successful AI Roundtable, with key stakeholders contributing different perspectives and insights on AI policy, regulation, and opportunities.

A summary of the AI Policy Roundtable has been well covered here, but today we want to recast it in terms of what it means for Kenya.

At the moment, there are several global initiatives led by the US, the European Union, and China in a race to shape the global policy and regulatory direction for AI.

The US seemingly approaches the issue from a safety and security perspective, the European Union prefers a human rights and data protection approach, while the philosophy behind the Chinese AI industry remains unclear.

In a competitive global environment where each leading actor is trying to claim the ‘first mover’ advantage, strict or excessive regulatory constraints often mean less agility to innovate and deploy.

Oftentimes, AI developers and practitioners in unregulated markets tend to have an advantage over those operating in a more regulated environment.

On the other hand, unregulated environments pose significant, some would even claim existential, dangers. If the AI sector is left unregulated, that is, left to its own devices in terms of what it can pursue, how it can pursue it, and when it can pursue it, there could be a proliferation of irresponsible AI.

To regulate AI—or not?

This challenge can be framed in terms of a question: what, if anything, should an AI regulatory framework look like?

The answer perhaps lies somewhere in the middle of the regulatory spectrum: between strict, prescriptive regulation on one end and a liberal, unregulated free-for-all on the other.

That sweet spot is often described as the self-regulatory framework, where industry players and civil society groups come together to prescribe a code of conduct and present it to the regulatory authorities for endorsement and oversight.

Whereas the self-regulatory approach is fairly standard and is practised, for example, in the media, advertising, and credit card industries, it may not address all the dimensions found within the AI industry.

Prof Jovan Kurbalija, director of the Diplo Foundation, a think tank, proposes an ‘AI Pyramid’: a framework for discussing the various facets of, and possibly different interventions for, each of the four layers that constitute AI systems.

The Hardware and Data Layers

The bottom, foundational layer is the hardware layer, which covers the enormous storage and computing power required to host large amounts of data and to drive the prerequisite learning process for AI models.

At a global level, there is a race to produce the latest supercomputing power to support current and future generations of AI-related data and models.

For emerging economies like Kenya, it means we shall continue to purchase computing and storage power from developed economies, as provisioned by cloud and data centre infrastructures. This is in itself a barrier to our AI developers, who must pay a relatively higher price to access these computing resources.

What, then, should our response be to this reality? This is an area for further discussion.

The next layer in the Diplo AI Pyramid is the data layer.

AI eats and sleeps data. It has it for breakfast, lunch, and dinner. Indeed, the more data, the merrier. Issues of concern and discussion range from how that data is acquired, stored, and processed to whether that data is representative enough to avoid bias, whether it is copyrighted, and whether the data owners should have a more equitable share of the value it generates for the AI system owners.
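To make the representativeness concern concrete, here is a minimal sketch in Python using an entirely made-up dataset (the columns region and approved are illustrative, not drawn from the roundtable) of the kind of quick check a developer can run before training a model:

```python
import pandas as pd

# Hypothetical labelled training data; the columns are illustrative only.
df = pd.DataFrame({
    "region":   ["Nairobi", "Nairobi", "Mombasa", "Kisumu", "Nairobi", "Garissa"],
    "approved": [1, 0, 1, 1, 0, 0],
})

# Share of training examples per region: a heavily skewed distribution is an
# early warning that a model may underperform for under-represented groups.
print(df["region"].value_counts(normalize=True))

# Outcome rate per region: large gaps can signal bias baked into the source data.
print(df.groupby("region")["approved"].mean())
```

Checks like this do not resolve the deeper questions of copyright and value sharing, but they show that some data-layer concerns are already auditable in code.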

Additionally, AI datasets tend to have many features or fields, sometimes called data points, most of which require manual data labelling, a labour-intensive human process that is often outsourced to developing countries.

Whereas this is a useful source of digital employment for the youth, it is often the entry-level, bottom-of-the-pyramid type of AI work with its many labour-related challenges.

What should our interventions be in terms of building and deploying talent at the higher levels of the AI ‘food chain’? How can we ensure that Kenyan tech talent is developed and deployed to play at those higher levels of the AI value chain?

The AI Model and Application Layers

This brings us to the next layer: building AI models.

If data is the oil for AI systems, then AI models are the engine that drives the whole system.

AI models are mathematical structures that are unleashed on data to ‘learn’ from past datasets and subsequently make predictions about future events that the machine is yet to encounter.

AI models, for example, make it possible for a machine to recommend your next preferred movie based on what you liked previously, to recommend your next purchase based on products you bought before, or even to flag a transaction as fraudulent based on past fraudulent transactions.
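For readers who want to see the idea rather than take it on faith, here is a minimal sketch, assuming scikit-learn and made-up transaction numbers (the features and amounts are illustrative, not a real fraud model):

```python
from sklearn.linear_model import LogisticRegression

# Past transactions: [amount in KES, hour of day]; label 1 means fraudulent.
X_past = [[500, 14], [1200, 10], [90000, 3], [150, 9], [75000, 2], [300, 16]]
y_past = [0, 0, 1, 0, 1, 0]

# The model 'learns' from the past dataset...
model = LogisticRegression()
model.fit(X_past, y_past)

# ...and then makes a prediction about a transaction it has never encountered.
new_transaction = [[82000, 4]]
print(model.predict(new_transaction))        # predicted label: 0 or 1
print(model.predict_proba(new_transaction))  # the model's confidence
```

Even this toy example exposes the core issue: the prediction is only as good as the past data, which is precisely where the questions of correctness and harm below come in.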

While these insights and recommendations are, to a large extent, very useful, they are not always correct, and some are outright harmful.

Who, for example, takes responsibility when an AI-recommended action results in significant harm to an individual?

If your self-driving car is involved in a fatal accident, who takes liability? Is it the car manufacturer, the AI software developer, the 5G network that delayed delivering the stop signal, or the passenger who failed to engage the human override in the system?

There are no clear answers to these issues at the moment; they need more discussion.

The risks around excessive surveillance and mistaken identity arising from AI image recognition systems falsely identifying individuals as criminals are well documented.

Additionally, the political use of recommender systems to swing elections, without voters being aware they are being nudged one way or another, is increasingly becoming a global issue, as documented during the Cambridge Analytica investigations into the 2016 US elections.

What would be the Kenyan response to these potentially harmful AI-driven decisions?

The final layer in the Diplo AI Pyramid is the application layer.

This is the interface that ‘Wanjiku’ sees when interacting with the other three underlying AI layers that may not be visible to the non-technical mind.

This layer also has issues, including but not limited to the data that Wanjiku types into the interface. That data may contain very sensitive personal information, and Wanjiku may not fully comprehend what else the AI system owners could do with it.

Similarly, the use of AI in all sectors of our lives, such as media, education, film, and art, raises the question of originality and of who should be credited and compensated for AI-assisted works.

There are no clear answers to all these questions. And that is why the conversations at the KICTANet AI Roundtable should be the beginning and not the end. Let’s have some more.

John Walubengo is an ICT lecturer and consultant. @jwalu.
