The Risks Artificial Intelligence Poses for the Global South

Armed Conflicts, Artificial Intelligence, Civil Society, Development & Aid, Featured, Headlines, Human Rights, Sustainable Development Goals, TerraViva United Nations

UN Secretary General António Guterres addresses the session “Strengthening multilateralism, economic – financial affairs and artificial intelligence” on July 6 at the 17th summit of BRICS in Rio de Janeiro. For the first time ever, artificial intelligence was a major topic of concern at the BRICS summit. Credit: UN Photo/Ana Carolina Fernandes

UNITED NATIONS, Jul 14 2025 (IPS) – Artificial intelligence (AI) is rapidly developing and leaving its mark across the globe. Yet the implementation of AI risks widening the gap between the Global North and South.


It is projected that the AI market’s global revenue will grow by 19.6 percent each year, and by 2030 AI could contribute USD 15.7 trillion to the global economy. However, these gains to nations’ GDP will be unevenly distributed, with North America and China capturing the largest share while the Global South gains far less.

The risks of AI to the Global South

With less capacity to fund research, development and implementation, fewer countries in the Global South are adopting AI technology. Access to affordable AI compute for training AI models is one of the field’s greatest barriers to entry in the Global South, according to the 2024 UN report, “Governing AI for Humanity.”

Further, AI is being designed for profitable market extraction that does not benefit the global majority, according to Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation. Because countries in the Global North are AI’s primary investors, the technology is being developed to address their needs.

“The result is a quiet erosion of political and economic autonomy,” he said. “Without deliberate intervention, AI risks becoming a mechanism for reinforcing historical patterns of exploitation through technical means. It also risks losing the incredible value of diverse, globally minded inputs into designing our collective AI future.”

Across the world, people risk losing their jobs to AI, but because many countries in the Global South rely on labor-intensive industries, AI poses a greater threat of rising unemployment and poverty there. Children, women, youth, people with disabilities, older workers, creatives and people whose jobs are susceptible to automation are particularly at risk.

According to Daron Acemoglu, professor at the Massachusetts Institute of Technology, labor-replacing AI poses a greater threat to workers in the developing world, as capital-intensive technology may be of little use in nations where capital is often scarce and labor is abundant and cheap. Technology that prioritizes labor-intensive production is better suited to their comparative advantage.

“Because advanced economies have no reason to invest in such labor-intensive technologies, the trajectory of technological change will increasingly disfavor poor countries,” he said.

If these trends continue, these nations will experience rising unemployment and fall behind in the deployment of capital-intensive AI, owing to limited financial resources and digital skills. Stronger AI policies and guidelines, as well as education on data privacy and algorithmic bias, could help reduce this inequality.

Evidently, AI threatens to widen the gap between the Global North and South, as AI capacities are consolidated within a small group of institutions and regions. In Dhar’s view, AI will need to be designed to serve people and problems rather than be focused on profit maximization.

“If left unaddressed, this imbalance will cement a way of thinking about the world that mirrors the development of the Internet or social media – a process we do not want to replicate,” Dhar said.

Opportunities presented by the new technology

But the development of AI also presents opportunities for the Global South.

AI could be used to build context-specific systems for local communities in the Global South, rather than systems modeled solely on the Global North, according to Dhar. “It can unlock new models of inclusion and resilience,” he said.

For example, AI could aid farmers in decision-making by providing weather and drought predictions drawn from geospatial intelligence, as well as market price information. AI could also help train farmers and other producers, and it could be used to improve education and healthcare in nations where shortfalls in both harm populations and stunt development.

Acemoglu said that AI should be developed to complement rather than replace human labor for these benefits to become possible. “That will require forward-looking leadership on the part of policymakers,” he said.

AI in conflict

AI is also starting to make an appearance in conflict. In Ukraine, autonomous drones capable of tracking and engaging enemy targets are in use, along with BAD.2 robot dogs, ground drones that can survey areas for enemy forces. Autonomous machine guns, in which AI helps spot and target enemies, are also in use.

The use of AI in conflict poses an ethical dilemma. AI could protect human lives on one side of a conflict while posing a grave threat to lives on the other side of the battlefield. It also raises the question of whether AI should be given the power to inflict harm.

But the use of AI could perhaps reduce the number of people drawn into the conflicts that harm developing countries, freeing them for other sectors where they can realize more of their potential and contribute to their country’s economic development.

What international frameworks should do

Clear international frameworks must be established to prevent a rise in inequality and a greater gap between the Global North and South.

For the first time ever, AI was a major topic of discussion at the 17th BRICS summit in Rio de Janeiro, a gathering that serves as a coordination forum for nations from the Global South. BRICS member countries signed the Leaders’ Declaration on Global Governance of Artificial Intelligence, which presents guidelines to ensure AI is developed and used responsibly to advance sustainability and inclusive growth.

The declaration called on UN member states to promote the inclusion of emerging markets and developing countries (EMDCs) and the Global South in decision-making on AI.

“New technologies must operate under a governance model that is fair, inclusive, and equitable. The development of AI must not become a privilege for a handful of countries, nor a tool of manipulation in the hands of millionaires,” Brazilian president Luiz Inácio Lula da Silva said at the summit.

However, the UN report “Governing AI for Humanity” found that 118 countries, most of which are in the Global South, were not part of a sample of non-UN AI governance initiatives, while seven countries, all of which are in the Global North, were included in all initiatives.

According to Dhar, global governance must create a more equitable distribution of power, one that shares ownership and embeds the Global South at every level of institutions, agreements and investments rather than engaging it merely for consultation. These nations must also be supported in building capacity, sharing infrastructure, contributing to scientific discovery and participating in the creation of global frameworks, he said.

In his remarks at the BRICS summit, UN Secretary-General António Guterres expressed his concern over the weaponization of AI and stressed the importance of AI governance focused on equity. For this to happen, he said, the realities of the current “multipolar world” must be addressed.

“We cannot govern AI effectively—and fairly—without confronting deeper, structural imbalances in our global system,” Guterres said.

Dhar emphasized that the inclusion of every person in the conversation on AI is crucial to creating legitimate global technological governance.

“The future of AI is being negotiated with immediacy and urgency,” Dhar said. “Whether it becomes a force for collective progress or a new vector for inequality depends on who is empowered to shape it.”

IPS UN Bureau Report


‘The Closure of Meta’s US Fact-Checking Programme Is a Major Setback in the Fight Against Disinformation’

Artificial Intelligence, Civil Society, Education, Featured, Global, Headlines, Press Freedom, TerraViva United Nations

Jan 24 2025 (IPS) – CIVICUS speaks with Olivia Sohr about the challenges of disinformation and the consequences of the closure of Meta’s fact-checking programme in the USA. Olivia is the Director of Impact and New Initiatives at Chequeado, an Argentine civil society organisation working since 2010 to improve the quality of public debate through fact-checking, combating disinformation, promoting access to information and open data.


Olivia Sohr

In January 2025, Meta, the company that owns Facebook, Instagram and WhatsApp, announced the end of its US fact-checking programme. Instead, the company will implement a system in which users can report misleading content. The decision came as Meta prepared for the start of the new Trump presidency. Explaining the change, Meta CEO Mark Zuckerberg said the company was trying to align itself with its core value of free speech. Meta also plans to move some of its content moderation operations from California to Texas, which it says is in response to concerns about potential regional bias.

What led to Meta’s decision to end its fact-checking programme?

While the exact details of the process that led to this decision are unknown, in his announcement Zuckerberg alluded to a ‘cultural shift’ that he said was cemented in the recent US election. He also expressed concern that the fact-checking system had contributed to what he saw as an environment of ‘excessive censorship’. As an alternative, Zuckerberg is proposing a community rating system to identify fake content.

This decision is a setback for information integrity around the world. Worryingly, Meta justifies its position by equating fact-checking journalism with censorship. Fact-checking is not censorship; it’s a tool that provides data and context to enable people to make informed decisions in an environment where disinformation is rife. Decisions like this increase opacity and hamper the work of those focused on combatting disinformation.

The role of fact-checkers on Meta’s platforms has been to investigate and label content found to be false or misleading. Decisions about the visibility or reach of such content, however, are made solely by the platform, which has given assurances that it will only reduce exposure and add context, not remove or censor content.

How the community rating system will work has not yet been specified, but the prospects are not promising. Experience from other platforms suggests that these models tend to increase disinformation and the spread of other harmful content.

What are the challenges of fact-checking journalism?

Fact-checking is extremely challenging. While those pushing disinformation can quickly create and spread completely false content designed to manipulate emotions, fact-checkers must follow a rigorous and transparent process that is time-consuming. They must constantly adapt to new and increasingly sophisticated disinformation strategies and techniques, which are proliferating through the use of artificial intelligence.

Meta’s decision to end its US verification programme makes our task even more difficult. One of the key benefits of this programme is that it has allowed us to reach out directly to those who spread disinformation, alerting them with verified information and stopping the spread at the source. Losing this tool would be a major setback in the fight against disinformation.

What are the potential consequences of this change?

Meta’s policy change could significantly weaken the information ecosystem, making it easier for disinformation and other harmful content to reach a wider audience. For Chequeado, this means we will have to step up our efforts to counter disinformation, within the platform and in other spaces.

In this scenario, fact-checking journalism remains essential, but it will need to be complemented by media literacy initiatives, the promotion of critical thinking, technological tools to streamline the work, and research to identify patterns of disinformation and the vulnerability of different groups to fake news.


