Media has long moulded our perception of Artificial Intelligence (AI), often portraying it as a replacement for everything ‘human’. Generative AI tools, such as ChatGPT, ChatSonic, and ChatPDF, have been making headlines for their potential to revolutionise content creation in various industries, including journalism.
However, concerns about bias and misinformation have also arisen, prompting policymakers, tech users, and journalists to urgently call for the establishment of guidelines for AI development and deployment in newsrooms.
This year, we commemorated World Press Freedom Day under the theme “Shaping a Future of Rights: Freedom of expression as a driver for all other human rights.” The theme is a timely prompt for journalists, policymakers, and the public to reconsider the significance of freedom of expression in the digital age.
World Press Freedom Day was established by the United Nations in 1993 to raise awareness of press freedom’s importance and to remind governments of their duty to uphold the right to freedom of expression as enshrined in Article 19 of the Universal Declaration of Human Rights (UDHR). In an ever-changing world of communication technologies, however, the press has undergone a remarkable transformation, adapting to each new technology as it emerged.
From the age of the telegraph to the digital era, where social media platforms and online forums form the backbone of political discourse, journalism has continuously evolved. As AI gains prominence in journalism, crucial questions should be asked:
Can AI truly enhance the field of journalism and uphold the right to freedom of expression, particularly on the African continent, or does it pose a threat to journalistic integrity and the essence of human rights? Furthermore, to what extent do biases inherent in AI systems, particularly in their training data, impact the representation and dissemination of information in African newsrooms? Is it possible for AI to overcome these biases and contribute positively to the media landscape in Africa?
Generative AI: A Double-Edged Sword for Newsrooms
Through AI, humans aim to create intelligent machines capable of performing tasks that typically require human intelligence, such as decision-making, language translation, and problem-solving. AI operates by using algorithms, sets of instructions that direct these machines’ actions. An algorithm is simply a process or set of rules to be followed. These algorithms process information, identify patterns, and make predictions. For this to happen, ample data or information needs to be fed to machines in order for them to generate responses.
Generative AI, in turn, is a subset of AI that employs algorithms to produce new content from scratch. Newsrooms are increasingly adopting it for its potential benefits, such as faster content creation, fact-checking and research assistance, and its apparent multilingual capabilities. However, generative AI raises concerns about biases and ethical dilemmas, particularly in African newsrooms. While AI can be trained to imitate human behaviour and preferences, it may also adopt human biases.
Scholars like Safiya Umoja Noble argue that technology is not neutral and that the biases present in AI systems often reflect the biases of their creators. Thus, by involving a diverse group of developers, journalists, policymakers, and users in the design and implementation of AI systems, we can better identify and address potential biases and inequalities. This can lead to AI algorithms that are less likely to perpetuate harmful stereotypes or marginalise certain groups.
Is AI a force for good or a threat to the essence of journalism?
AI has the potential to revolutionise the way news is created, disseminated, and tailored in Africa, which could foster greater diversity and accessibility — or so argue some of its current makers and supporters. As I was writing this article, I was tempted to ask OpenAI’s ChatGPT for three benefits of using AI in newsrooms. Below was its answer:
- Accelerated content creation
- Fact-checking and research assistance
- Multilingual capabilities
At first glance, these are good and commendable benefits, and they could greatly aid newsrooms. As an African tech consumer, however, I found that these claims needed further examination. There are various problems with some of these claimed benefits as they relate to the African context, particularly in newsrooms:
Fact-Checking and Research Assistance
AI models need information, or data, to generate a response, and most of the data currently used to train AI models is not representative of the African continent. For example, a chatbot requires extensive information on Zambia to write about Gender-Based Violence in Chibolya, as well as real-time, updated data to ensure accuracy; most lack such information.
Meredith Broussard’s book “Artificial Unintelligence: How Computers Misunderstand the World” warns against the uncritical adoption of technology and AI to address social issues, emphasising that computers have limitations and that relying solely on technology can reproduce discriminatory outcomes.
AI systems trained on unrepresentative data can inadvertently perpetuate discrimination and injustice. This is especially troubling in the context of journalism, where biased AI-driven content could further marginalise underrepresented communities and distort public discourse. Journalists must remain vigilant as gatekeepers, ensuring the accuracy and fairness of news stories before publication or broadcast.
Information gaps in AI are already causing problems, highlighting the need for African newsrooms to create their own AI imaginaries. By developing prototypes and/or forging partnerships with tech companies, African newsrooms can begin to address some of these challenges. African journalists should not only be consumers of AI but also auditors and creators of such technologies; this should be our call during this World Press Freedom Day.
Multilingual Capabilities
Though these models claim to interact in various languages, their interaction in African languages is limited. This is problematic, as Africa has an estimated population of over 1.4 billion people across 54 countries. Although most countries in Africa speak English, French, Spanish, or Portuguese, these are colonially enforced languages, and Africa is home to over 2,000 languages of its own. The question the press should ask is: with such diversity in Africa, whose knowledge is represented, and how is it disseminated through AI?
To ensure that the technology serves as a force for good, it is crucial to address the potential biases and inequalities that might be perpetuated by its algorithms. By embracing a human-centred approach to AI development and fostering greater representation and inclusivity, we can work towards dismantling unfair systems and creating a more equitable, informed media landscape that respects the rights of all humans.
Journalists, policymakers, and the public must come together to ensure that the integration of AI in journalism serves to strengthen the right to freedom of expression rather than weaken it. After all, history has shown us that the mode of communication may change, but the message should remain the same.
Emsie Erastus is a Digital Rights Specialist at Internews based in Lusaka, Zambia.