Gendered disinformation targets individuals based on their gender or exploits gendered narratives for political, social, or economic gain.
Disinformation refers to the deliberate creation and spread of false information with the intent of harming an individual, social group, organization, or country. Disinformation campaigns often blend distorted or emotionally charged content with elements of truth, while the overall message lacks factual accuracy. The information may be outright false or subtly manipulated.
Gendered disinformation and hate speech are not new challenges, and organizations worldwide have continuously tried to mitigate their occurrence and consequences, with varying outcomes.
In Kenya, gendered disinformation is a key threat to women in public-facing leadership positions. Women journalists are also disproportionately affected, with attacks threatening their credibility, independence, and safety.
Increased access to and reliance on digital devices has brought millions of Kenyans across demographics online and onto different social media platforms. In this environment, gendered disinformation and hate speech can spread quickly and at high volume across various media.
KICTANet attended a webinar on this problem and identified gaps that a multistakeholder approach can effectively address.
There needs to be a stronger connection between users' understanding of their rights and their responsibilities when using social platforms.
The Constitution of Kenya states that it is an offense to deliberately create and spread false or misleading information in the country. However, perpetrators hide behind the freedom of expression and freedom of the media guaranteed by the same constitution. These freedoms are thus being used to curtail other people's rights and liberties online.
Governments have enacted various laws to counter hate speech, incitement to violence, and the propagation of false information. However, a communication landscape rapidly changing through technological innovation leaves many of these laws playing catch-up, including in Kenya.
There are growing concerns that our laws are neither responsive nor proactive enough to contain the problem. For instance, AI deepfakes, a potent frontier for disinformation and hate speech, have been around for some time, yet no legislation has been passed in Kenya to regulate them.
The legal framework to address gendered disinformation and hate speech in the country is disjointed. A coordinated effort is needed to ensure that the legal framework is functional and supports the country's citizens.
Research and Academia
Stakeholders in academia are uniquely positioned to teach young minds and society about media literacy. Questions have been asked about the scale of effort on this front and the outcomes.
How much research on gendered disinformation and hate speech is being done? In many quarters, limited funding is the usual constraint on research.
Limited data on gendered disinformation and hate speech can be countered by undertaking country-wide digital mapping efforts. Mapping will generate up-to-date statistics about who is being affected online and what they are experiencing.
Research by KICTANet and CIPESA found that there is an opportunity to explore the intersection between hate speech, disinformation, misinformation, OGBV, and the country’s economy.
Civil Society Organizations (CSOs)
Training remains key. CSOs can play a critical role in setting up country teams to support women involved in digital efforts across Africa. There is also demand for information on how to use digital platforms, and calls for further training on how best to interact with target audiences online.
Efforts to counter gendered disinformation need to be coordinated, not only on the legal front. It is important to harmonize all ongoing efforts and determine who is responsible for what, who is leading the charge, where the gaps are, and what progress has been made so far.
Big Tech Content Moderation
Digital social platforms have policies prohibiting the kind of content used in hate speech and gendered disinformation campaigns. However comprehensively outlined, these policies fall short in practice because enforcement is inconsistent. Furthermore, beyond banning or suspending an account, perpetrators face little to no other consequences.
Purveyors of gendered disinformation and hate speech use innovative tactics to outwit moderation enforcement. One such technique is spreading disinformation in local languages: more often than not, neither a human nor an AI moderator is conversant in the language or the context. Disinformation campaigns are highly context-specific.
Moderation techniques, sadly, do not evolve as fast, and some of this content slips through the dragnets. There is an opportunity for big tech companies to ramp up their content moderation practices.
All of Us
There is a place for critical thinking in countering gendered disinformation. People inadvertently share a great deal of disinformation and hate speech as if it were the truth. While an individual may not have the resources to fact-check massive volumes of manipulated information, we can, in our own capacities, stay vigilant and scrutinize the information we encounter and share.
The Disability Intersection
The country has barely scratched the surface of the intersection of disability, gendered disinformation, and hate speech. For now, the pressing concern is whether the existing information on digital safety campaigns, legal frameworks, and reporting mechanisms is in a format that is perceivable and understandable to people of all abilities.
It is important to embed disability and accessibility experts in each of these efforts so that the outputs are accessible and helpful to persons with disabilities.
KICTANet, through its gender program, has worked to make women safer online. You can find their work here.