By John Walubengo
Artificial Intelligence (AI) has been getting positive reviews since the AI-driven chatbot ChatGPT hit the headlines. But we very rarely hear about the dark side of AI, which is equally important and should be grabbing the headlines in equal measure.
Today we get a chance to review some of the dark sides of AI, that is, the potentially harmful effects that AI can have on individuals or society in general.
Before we start, some disclaimers may be necessary. The intention is not to discourage the use or adoption of AI but rather to highlight the risks and encourage more caution for both the developers and general users.
By now, we have become so accustomed to positive profiling that we don’t even realize it is AI-driven. Through what are commonly known as recommender engines, customers are used to getting recommendations based on what the AI logic or algorithm can predict from their previous history.
The algorithm can recommend or predict your preferred next movie title, news feed, restaurant, financial product or consumer product based on your transaction history or digital activities, profiled against thousands of similar data points it has access to.
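To make the idea concrete, here is a deliberately tiny sketch of how a recommender engine might work in principle: it finds the user whose past ratings look most like yours and suggests what they liked that you have not yet seen. The users, items and ratings below are invented for illustration; real engines use far richer data and far more sophisticated models.

```python
# Toy recommender sketch (illustrative only; data is invented).
# Users are keys; values map items to past ratings (0 = not yet seen).
user_item = {
    "alice": {"movie_a": 5, "movie_b": 4, "movie_c": 0},
    "bob":   {"movie_a": 5, "movie_b": 5, "movie_c": 4},
    "carol": {"movie_a": 1, "movie_b": 0, "movie_c": 5},
}

def similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(u[i] * v[i] for i in u)
    norm_u = sum(x * x for x in u.values()) ** 0.5
    norm_v = sum(x * x for x in v.values()) ** 0.5
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend(target, data):
    """Suggest unseen items that the most similar other user liked."""
    others = [(similarity(data[target], data[o]), o)
              for o in data if o != target]
    _, nearest = max(others)
    return [item for item, rating in data[nearest].items()
            if rating >= 4 and data[target][item] == 0]

print(recommend("alice", user_item))  # → ['movie_c']
```

Here Bob's ratings are closest to Alice's, so the engine suggests the film Bob liked that Alice has not watched. The same logic, scaled up to millions of data points, is what powers the recommendations described above.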
In most cases, the algorithms are spot-on and very helpful since they fit in perfectly with your profile. No wonder digital marketing is the new normal and a billion-dollar business.
Rather than place adverts the traditional way, where a given advert is seen by thousands of viewers, most of whom find the marketed products unsuitable and ignore them, AI algorithms can serve tailored adverts to different users at the right time and place.
Digital marketing supported by AI has a higher conversion rate since it can know what you need before you even search for it. And often, that is a good thing.
But an AI algorithm that shortlists ten male candidates out of a thousand applicants for a single CEO job could be harmful, because it draws on a historical data set that is inherently biased against female candidates.
AI algorithms can therefore perpetuate inherent, real-world discrimination in the digital realm.
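A minimal sketch shows how this happens. Suppose a naive screening model scores applicants by how often people "like them" were hired in the past; the hiring records below are invented purely for illustration. If the historical data is skewed, the model faithfully reproduces the skew.

```python
# Illustrative sketch of bias inherited from historical data.
# The records below are invented for demonstration only.
history = [
    # (gender, was_hired)
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", False), ("female", False), ("female", True),
]

def hire_rate(gender, records):
    """Fraction of past applicants of this gender who were hired."""
    matched = [hired for g, hired in records if g == gender]
    return sum(matched) / len(matched)

# A naive "model" that scores new applicants by their group's
# historical hire rate simply replays the past discrimination.
print(hire_rate("male", history))    # → 0.75
print(hire_rate("female", history))  # → 0.25
```

Nothing in the code mentions discrimination, yet the output discriminates: the bias lives in the data, not in any single line of logic, which is what makes it easy to overlook.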
That same AI algorithm that keeps recommending your favourite news feed, movie, product or restaurant with precision can lock you, at a personal level, into a very narrow range of views and choices.
Sometimes called ‘echo chambers’, these are algorithm-created environments in which individuals encounter only what the algorithms feed them, based on their previous digital footprint in terms of ‘likes’, ‘retweets’ and ‘followers’, amongst other synthetic social constructs.
AI algorithms can therefore increase the number of individuals with ‘tunnel vision’, who are shielded from, and lack the benefit of, alternative views they would otherwise have encountered in traditional media and other offline conversations.
Worse still, the algorithms can know the location, mood and disposition of these tunnel-visioned individuals, making it easy to weaponize them for or against political, commercial or other causes.
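The echo-chamber mechanism is a simple feedback loop, sketched below with invented topics: a feed that always serves the topic a user has clicked most will keep serving it forever, so exposure never widens.

```python
# Sketch of an echo-chamber feedback loop (topics are invented).
from collections import Counter

def next_feed(click_history):
    """Serve only the topic the user has clicked most often."""
    return Counter(click_history).most_common(1)[0][0]

clicks = ["politics"]            # a single initial click...
for _ in range(5):
    clicks.append(next_feed(clicks))

print(set(clicks))  # → {'politics'}: the user never sees another topic
```

One click is enough to seed the loop: each recommendation reinforces the history that produced it, which is the ‘tunnel vision’ described above.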
Data Protection Interventions
Profiling supported by AI algorithms can present significant risks to individuals’ rights and freedoms and requires appropriate safeguards. The Kenya Data Protection Act (2019) attempts to regulate this new AI domain, stating in Section 35(1) as follows:
“Every data subject has a right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning or significantly affects the data subject.”
How effective this section is in addressing the dark sides of AI will be a subject in one of our future articles.
John Walubengo is an ICT Lecturer and Consultant. @jwalu.