Time magazine has dubbed 2024 a “super election year”. An astonishing 4 billion people are eligible to vote in countries across the world this year. Many are on the African continent, where presidential, parliamentary and general elections have already been held or are set for the latter half of the year.
Artificial intelligence (AI) will play a major role in many countries’ elections. In fact, it already does. AI systems are used in a number of ways. They analyse large amounts of data, like voter patterns. They run automated chatbots for voter engagement. They authenticate voters and detect cyber threats.
But many pundits and ordinary people alike seem unsure what to make of the use of AI in African electoral processes. It is often described as simultaneously promising and perilous.
We are experts on transnational governance whose ongoing research aims to define the challenges AI could pose to legitimate governance in Africa. We want to help create a base of empirical evidence that the continent’s electoral bodies can use to harness the potential benefits of AI and similar technologies while not ignoring the risks.
The effects of AI on electoral democracy in Africa will fundamentally depend on two factors. First, popular legitimacy and trust in AI. Second, the capability of African states to govern, regulate, and enforce oversight on the use of AI by all political stakeholders, including ruling and opposition parties.
Varied examples
It is too simplistic to say that the use of AI in elections is all good or all bad. The truth is that it can be both, depending on the two factors above: public trust in AI and the capacity of African states to regulate its use by key stakeholders.
Identity politics, diversity and digital illiteracy must also be taken into account. All of these shape the rise of polarisation and how susceptible political constituencies are to disinformation and misinformation.
For instance, during Kenya’s 2017 election, consulting firm Cambridge Analytica allegedly used AI to target voters with disinformation. This potentially influenced the outcome.
In South Africa, there is growing awareness that anonymous influencers, often positioned at the extremes of the political spectrum, contribute significantly to online misinformation and disinformation. These figures, who remain largely unknown to the general public, inject highly emotive and polarising content into discussions without adequate live moderation, often through automated processes.
But AI also has the potential to enhance electoral legitimacy. Kenya’s 2022 Umati project monitored social media for hate speech using computerised analysis known as natural language processing. Once harmful content had been flagged by the AI, it was removed. During Sierra Leone’s 2021 general election, its Election Monitoring and Analysis Platform identified and countered hate speech, disinformation and incitement to violence.
Similarly, in South Africa’s latest polls, AI-powered bots were used to mitigate the spread of disinformation.
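For readers curious about what such monitoring involves under the hood, the sketch below shows a toy text-classification pipeline of the kind these platforms build on. It is purely illustrative: the training posts, labels and threshold are invented, and it is not the actual Umati or Election Monitoring and Analysis Platform system.

```python
# Illustrative only: a toy text-flagging pipeline, not any real monitoring system.
# The example posts, labels and threshold below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled sample: 1 = harmful, 0 = benign (hypothetical data).
posts = [
    "They are vermin and must be driven out of our county",   # harmful
    "Chase them away before they can vote",                    # harmful
    "The rally starts at 10am at the town hall",                # benign
    "Remember to bring your ID card to the polling station",    # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a common baseline for text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New posts are scored; anything above the threshold is flagged for human review.
incoming = ["Drive them out of our county", "Polling stations open at 6am"]
scores = model.predict_proba(incoming)[:, 1]
THRESHOLD = 0.5  # illustrative cut-off, not a recommended operating point
for post, score in zip(incoming, scores):
    status = "FLAG for review" if score >= THRESHOLD else "ok"
    print(f"{score:.2f}  {status}  {post}")
```

In practice such systems pass flagged posts to human moderators rather than removing content automatically, and the models are trained on far larger, locally relevant datasets in local languages.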
Elsewhere, facial recognition technology was used during Ghana’s 2020 general election to verify voters and prevent impersonation. In Nigeria’s 2019 election, the Automated Fingerprint Identification System detected duplicate registrations, bolstering the accuracy of voter rolls.
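As a rough illustration of how duplicate registrations can be caught, the sketch below compares made-up biometric feature vectors and flags near-identical pairs. Real AFIS deployments match minutiae-based fingerprint templates with far more sophisticated algorithms; the voter IDs, vectors and threshold here are hypothetical.

```python
# Illustrative only: a simplified sketch of duplicate detection on a voter roll.
# Real AFIS systems match minutiae-based fingerprint templates; here each record
# carries a made-up fixed-length feature vector instead.
from itertools import combinations
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical registrations: (voter_id, biometric feature vector).
registrations = [
    ("V001", [0.91, 0.12, 0.40, 0.77]),
    ("V002", [0.10, 0.85, 0.33, 0.05]),
    ("V003", [0.90, 0.13, 0.41, 0.76]),  # near-identical to V001: likely duplicate
]

MATCH_THRESHOLD = 0.995  # illustrative; real systems tune this against error rates
for (id_a, vec_a), (id_b, vec_b) in combinations(registrations, 2):
    if cosine_similarity(vec_a, vec_b) >= MATCH_THRESHOLD:
        print(f"Possible duplicate registration: {id_a} and {id_b}")
```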
Lessons and challenges
These cases offer valuable lessons for other countries on the continent – both in what works and what doesn’t.
There are several obstacles electoral governance bodies in most African countries must overcome. One is a scarcity of skilled professionals in data science and machine learning. Limited technological infrastructure is another. Regulatory and policy gaps must also be closed.
And ethical concerns cannot be ignored. For example, Kenya’s Huduma Namba national ID system and Nigeria’s telecommunication companies have been criticised for inadequate data protection. They’ve also been accused of using AI technology for surveillance.
In South Africa, a 2021 lawsuit took on Facebook for allegedly violating users’ privacy rights.
African countries need to allay people’s very valid concerns about ethics and data privacy in election technology. Part of doing so involves the development of robust normative, institutional and collaborative frameworks to govern the use of AI in fair, transparent and accountable ways. African states must seek to exercise sovereignty over AI systems – that is, they need to develop their own systems, fit for local purposes, rather than simply importing systems from elsewhere.
The frameworks we’re describing should include clear guidelines to promote African cultural values that protect human rights. They must also be designed to prevent the misuse of AI for electoral manipulation or suppression of political opposition.
Public trust in AI systems can also be built in a number of ways. These include public awareness and education campaigns. Transparency and accountability mechanisms that impose sanctions and provide remedies when breaches of trust and law occur are also crucial.
Examples exist
Several initiatives already exist from which the kind of frameworks we describe can be drawn. One example is the Association of African Electoral Authorities’ Principles and Guidelines for the Use of Digital and Social Media in Elections in Africa.
A number of African countries are already working to address the challenges and opportunities presented by AI and to develop appropriate governance mechanisms. Egypt’s National Council for Artificial Intelligence and Kenya’s Distributed Ledger and Artificial Intelligence Taskforce are examples of ongoing initiatives from which other countries’ electoral bodies can learn.
Overall, solid governance will be crucial for the successful integration of AI systems in promoting the legitimacy of African political processes.
The post “AI can make African elections more efficient – but trust must be built and proper rules put in place” by Shamira Ahmed, Policy Leader Fellow, Florence School of Transnational Governance, European University Institute was published on 06/17/2024 by theconversation.com