The trending AI tool ChatGPT has brought to light some of the ethical and legal questions that have dogged the AI industry for years. In this week's article, we examine a few of them through two AI use cases.
Imagine it is the year 2035, and self-driving cars, already mainstream in the US, Europe and China, are beginning to appear on Kenya's bumpy streets. It is a rush-hour morning, and you are sitting in a self-driving car on your way to work.
Suddenly the car runs over a school child who seems to have popped out of a blind spot and onto the road. The child is in a bad state, and you are not sure she will survive. A Good Samaritan rushes her to hospital as you sit and wait for the police to arrive and record the incident.
A lot goes through your mind as you wait in the self-driving car. Who is to blame? Who takes liability?
Who is to blame?
Is it the car manufacturer, or the car's owner? Is it the software engineers who built the AI logic running the vehicle, or the passenger in the car? Is it the telco service provider whose functional 5G network is a prerequisite for self-driving vehicles, or the insurer?
Before you can answer, the cops arrive and face the same dilemma. In everyday accident situations, they are used to lecturing the driver on the benefits of safe driving before confiscating their driving license and telling them to follow them to the nearest police station.
This time there is no driver in the car to lecture, only a passenger; an impatient one at that, who is late for work and wants out of the accident scene in the shortest time possible.
Late for your class, you decide you are done with these fancy self-driving cars and want to go back to the good old days of Uber or Littlecab. So you tell the cop to scan the QR code on the windscreen for the contacts of the car's owner and take up the matter with them, as you jump into your arriving Littlecab and head to the university.
ChatGPT does assignments
Despite the morning misfortune, you make it to your programming class and leave the students with an assignment to submit on the student portal, planning to mark it over the weekend and review it with the class the following week.
Back in your office, you take a well-deserved coffee break as you prepare for your next class session. As you browse through your next lesson notes, you notice that all the students have already submitted the programming assignment, the one you thought they would need a week to complete.
At first you suspect you issued the wrong assignment. But after sampling and reviewing a few of the submissions, you realize that not only was it the correct assignment, but the students had produced answers better than those in your own marking scheme.
The students had used the new AI tool, ChatGPT, to complete the weekly assignment in less than a minute.
More questions start going through your mind.
Is this ethical? Should the university block the AI tool? Is that even the correct intervention, considering that you are training students to develop such tools and to use them to support their current and future work?
Is it time lecturers reviewed and upgraded their pedagogy, that is, their philosophy, methods and strategies of teaching, in light of developments in the AI sector?
There are no clear-cut answers to some of these questions. What is clear, however, is that the AI sector is on a bullish run and will touch every sector of the economy in ways we have yet to begin thinking about.
Are we ready for the next wave of AI technologies?