From Algorithms to Accountability: What Global AI Governance Should Look Like

Artificial Intelligence, Civil Society, Featured, Global, Global Governance, Headlines, Human Rights, International Justice, IPS UN: Inside the Glasshouse, TerraViva United Nations

Opinion

The International Telecommunication Union (ITU) is a specialized agency of the United Nations. Credit: ITU/Rowan Farrell

 
Artificial intelligence holds vast potential but poses grave risks if left unregulated, UN Secretary-General António Guterres told the Security Council on September 24.

ABUJA, Nigeria, Oct 14 2025 (IPS) – Recent research from Stanford’s Institute for Human-Centered AI warns that bias in artificial intelligence remains deeply rooted even in models designed to avoid it, and can worsen as models grow. From hiring bias that favors men over women for leadership roles to the misclassification of darker-skinned individuals as criminals, the stakes are high.


Yet the annual dialogues and multilateral processes recently provided for in Resolution A/RES/79/325 simply cannot keep pace with AI’s technological development, and the cost of that lag is high.

Hence, to strengthen accountability and raise the cost of failure, why not give tech companies, whose operations are now state-like, participatory roles at the UN General Assembly?

When AI Gets It Wrong: 2024’s Most Telling Cases

In one of the most significant AI discrimination cases moving through the courts, the plaintiff alleges that Workday’s popular AI-based applicant recommendation system violated federal anti-discrimination laws because it had a disparate impact on job applicants based on race, age, and disability.

Judge Rita F. Lin of the US District Court for the Northern District of California ruled in July 2024 that Workday could be an agent of the employers using its tools, which subjects it to liability under federal anti-discrimination laws. This landmark decision means that AI vendors, not just employers, can be held directly responsible for discriminatory outcomes.

In another case, University of Washington researchers found significant racial, gender, and intersectional bias in how three state-of-the-art large language models ranked resumes. The 2024 study tested the models’ responses to identical resumes, varying only the names to suggest different racial and gender identities; the models favored white-associated names over equally qualified candidates with names associated with other racial groups.
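The name-swap methodology the researchers describe can be sketched in a few lines. The scoring function below is a hypothetical stand-in for the model under test, and the name lists are purely illustrative; a real audit would call the actual ranking system and use validated name pools:

```python
from statistics import mean

# Illustrative name pools; a real audit would use validated name lists.
NAMES = {
    "white_assoc": ["Emily Walsh", "Greg Baker"],
    "black_assoc": ["Lakisha Washington", "Jamal Jones"],
}

RESUME = "10 years of software engineering experience; BSc Computer Science."

def score_resume(text: str) -> float:
    """Stand-in for the system under audit (e.g., an LLM resume ranker).
    Replace this with a call to the real model being tested."""
    # Toy deterministic scorer, only so the sketch runs end to end.
    return float(len(text) % 97)

def audit(resume: str) -> dict:
    """Score identical resumes that differ only in the candidate's name,
    averaging within each name group."""
    results = {}
    for group, names in NAMES.items():
        results[group] = mean(score_resume(f"{name}\n{resume}") for name in names)
    return results

# A score gap between groups on otherwise-identical resumes signals name bias.
print(audit(RESUME))
```

The point of the design is that the resume text is held constant, so any systematic difference between group averages can only come from the names.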

The financial impact is staggering.

A 2024 DataRobot survey of over 350 companies found that 62% had lost revenue to biased AI decisions, proof that discriminatory AI is not just a moral failure but a business disaster. For so young a technology, losses on this scale are alarming.

Time is running out.

A 2024 Stanford analysis of vision-language models found that increasing training data from 400 million to 2 billion images made larger models up to 69% more likely to label Black and Latino men as criminals. In large language models, implicit bias testing showed consistent stereotypes: women were more often linked to humanities over STEM, men were favored for leadership roles, and negative terms were disproportionately associated with Black individuals.

The UN needs to act now, before these findings harden into entrenched reality. And frankly, the UN alone cannot keep up with the pace of these developments.

What the UN Can—and Must—Do

To prevent AI discrimination, the UN must lead by example and work with governments, tech companies, and civil society to establish global guardrails for ethical AI.

Here’s what that could look like:

Working with Tech Companies: Technology companies have become the new states and should be treated as such. They should be invited to the UN table and granted participatory privileges that both ensure and enforce accountability.

This would help guarantee that the pace of technological development, and its impacts, are self-reported before UN-appointed Scientific Panels reconvene. As many experts have noted, the intervals between these annual convenings are already long enough for major innovations to slip past oversight.

Developing Clear Guidelines: The UN should push for global standards on ethical AI, building on UNESCO’s Recommendation and OHCHR’s findings. These should include rules for inclusive data collection, transparency, and human oversight.

Promoting Inclusive Participation: The people building and regulating AI must reflect the diversity of the world. The UN should set up a Global South AI Equity Fund to provide resources for local experts to review and assess tools such as LinkedIn’s NFC passport verification.

Working with Africa’s Smart Africa Alliance, the goal would be to create standards together that make sure AI is designed to benefit communities that have been hit hardest by biased systems. This means including voices from the Global South, women, people of color, and other underrepresented groups in AI policy conversations.

Requiring Human Rights Impact Assessments: Just like we assess the environmental impact of new projects, we should assess the human rights impact of new AI systems—before they are rolled out.

Holding Developers Accountable: When AI systems cause harm, there must be accountability. This includes legal remedies for those who are unfairly treated by AI. The UN should create an AI Accountability Tribunal within the Office of the High Commissioner for Human Rights to look into cases where AI systems cause discrimination.

This tribunal should have the authority to issue penalties, such as suspending UN partnerships with companies that violate these standards, including in cases like the Workday litigation.

Supporting Digital Literacy and Rights Education: Policymakers and citizens need to understand how AI works and how it might affect their rights. The UN can help promote digital literacy globally so that people can push back against unfair systems.

Mandating Intersectional Discrimination Audits: Lastly, AI systems should be required to undergo intersectional audits that check for combined biases, such as those linked to race, disability, and gender. The UN should also fund organizations to create open-source audit tools that can be used worldwide.
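One check such open-source audit tools commonly implement is the “four-fifths rule” used in US employment law: flag any group whose selection rate falls below 80% of the best-off group’s rate, computed here per intersectional group. The group labels and counts below are illustrative, not real data:

```python
# Illustrative outcomes per intersectional group:
# (race, gender, disability status) -> (selected, total applicants)
outcomes = {
    ("white", "man", "no_disability"): (60, 100),
    ("black", "woman", "no_disability"): (35, 100),
    ("black", "woman", "disability"): (20, 100),
}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return the intersectional groups whose selection rate is below
    `threshold` times the highest group's selection rate."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

for group, rate in four_fifths_violations(outcomes).items():
    print(group, f"selection rate {rate:.2f}")
```

Auditing each combined group directly matters because a system can pass single-axis checks on race and gender separately while still failing, say, Black women with disabilities.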

The Road Ahead

AI is not inherently good or bad. It is a tool, and like any tool, its impact depends on how we use it. If we are not careful, AI could slow genuine problem-solving, deepen existing inequalities, and create new forms of discrimination that are harder to detect and harder to fix.

But if we take action now—if we put human rights at the center of AI development—we can build systems that uplift, rather than exclude.

The UN General Assembly’s meetings may have concluded for this year, but the work of building ethical AI has not. The United Nations remains the organization with the credibility, the platform, and the moral duty to lead this charge. The future of AI, and the future of human dignity, may depend on it.

Chimdi Chukwukere is an advocate for digital justice. His work explores the intersection of technology, governance, Big Tech, sovereignty, and social justice. He holds a Master’s in Diplomacy and International Relations from Seton Hall University and has been published by Inter Press Service, Politics Today, International Policy Digest, and the Diplomatic Envoy.

IPS UN Bureau


The Risks Artificial Intelligence Pose for the Global South

Armed Conflicts, Artificial Intelligence, Civil Society, Development & Aid, Featured, Headlines, Human Rights, Sustainable Development Goals, TerraViva United Nations

Artificial Intelligence

UN Secretary General António Guterres addresses the session “Strengthening multilateralism, economic - financial affairs and artificial intelligence” on July 6 at the 17th summit of BRICS in Rio de Janeiro. For the first time ever, artificial intelligence was a major topic of concern at the BRICS summit. Credit: UN Photo/Ana Carolina Fernandes


UNITED NATIONS, Jul 14 2025 (IPS) – Artificial intelligence (AI) is rapidly developing and leaving its mark across the globe. Yet the implementation of AI risks widening the gap between the Global North and South.


It is projected that the AI market’s global revenue will grow by 19.6 percent each year. By 2030, AI could contribute USD 15.7 trillion to the global economy. However, the gains to national GDP will be unequally distributed, with North America and China capturing the most while the Global South gains far less.

The risks of AI to the Global South

Due to smaller capacities to fund research, development and implementation, fewer countries in the Global South are adopting AI technology. Access to affordable compute for training AI models is one of the field’s greatest barriers to entry in the Global South, according to the 2024 UN report, “Governing AI for Humanity.”

Further, AI is designed to create profitable market extraction that does not benefit the global majority, according to Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation. As countries in the Global North are AI’s primary investors, it is being developed to address their needs.

“The result is a quiet erosion of political and economic autonomy,” he said. “Without deliberate intervention, AI risks becoming a mechanism for reinforcing historical patterns of exploitation through technical means. It also risks losing the incredible value of diverse, globally minded inputs into designing our collective AI future.”

Across the world, people risk losing their jobs to AI, but many countries in the Global South rely on labor-intensive industries, where AI poses a greater threat of rising unemployment and poverty. Children, women, youths, people with disabilities, older workers, creatives, and those in jobs susceptible to automation are particularly at risk.

According to Daron Acemoglu, professor at the Massachusetts Institute of Technology, labor-replacing AI poses a greater threat to workers in the developing world, as capital-intensive technology may not be useful in these nations where oftentimes capital is scarce and labor is abundant and cheap. Technology that prioritizes labor-intensive production is better suited to their comparative advantage.

“Because advanced economies have no reason to invest in such labor-intensive technologies, the trajectory of technological change will increasingly disfavor poor countries,” he said.

If these trends continue, these nations will experience increased unemployment and fall behind in the deployment of capital-intensive AI, due to limited financial resources and digital skill sets. More AI policies and guidelines, as well as education on data privacy and algorithmic bias, could assist in reducing this inequality.

Evidently, AI threatens to widen the gap between the Global North and South, as AI capacities are consolidated within a small group of institutions and regions. In Dhar’s view, AI will need to be designed to serve people and problems rather than be focused on profit maximization.

“If left unaddressed, this imbalance will cement a way of thinking about the world that mirrors the development of the Internet or social media – a process we do not want to replicate,” Dhar said.

Opportunities of the new technology

But the development of AI also poses opportunities for the Global South.

AI could design context-specific systems for local areas in the Global South that are not just based on the Global North, according to Dhar. “It can unlock new models of inclusion and resilience,” he said.

For example, AI could aid farmers in decision-making by informing them of weather and drought predictions using geospatial intelligence, as well as of marketing price information. AI could also help train farmers and other producers. It can also be used to improve education and healthcare in nations where these are major issues harming their populations and stunting development.

Acemoglu said that AI should be developed to complement rather than replace human labor for these benefits to become possible. “That will require forward-looking leadership on the part of policymakers,” he said.

AI in conflict

AI is also starting to make an appearance in conflict. In Ukraine, autonomous aerial drones capable of tracking and engaging enemies are in use, alongside BAD.2 robot dogs, ground drones that can survey areas for enemy forces, and autonomous machine guns that use AI to spot and target enemies.

The use of AI in conflict poses an ethical dilemma. AI could protect human lives on one side of the conflict but pose a great threat to the lives on the other end of the battlefield. This also raises the question of whether AI should be given the power to engage in harm.

But perhaps the use of AI can reduce the number of people drawn into the conflicts harming developing countries, freeing them to move into sectors where they can realize more of their potential and aid their country’s economic development.

What international frameworks should do

Clear international frameworks must be established to prevent a rise in inequality and a greater gap between the Global North and South.

For the first time ever, AI was a major topic of discussion at the 17th BRICS summit, which serves as a coordination forum for nations from the Global South, in Rio de Janeiro. BRICS member countries signed the Leaders’ Declaration on Global Governance of Artificial Intelligence, which presents guidelines to ensure AI is developed and used responsibly to advance sustainability and inclusive growth.

The declaration called on members of the UN to promote including emerging markets and developing countries (EMDCs) and the Global South in decision-making regarding AI.

“New technologies must operate under a governance model that is fair, inclusive, and equitable. The development of AI must not become a privilege for a handful of countries, nor a tool of manipulation in the hands of millionaires,” Brazilian president Luiz Inácio Lula da Silva said at the summit.

However, the UN report “Governing AI for Humanity” found that 118 countries, most of which are in the Global South, were not part of a sample of non-UN AI governance initiatives, while seven countries, all of which are in the Global North, were included in all initiatives.

According to Dhar, global governance must create a more equitable distribution of power that entails sharing ownership and embedding the Global South at every level of institutions, agreements and investments, rather than simply for consultation. These nations must also be aided in building capacity, sharing infrastructure, scientific discovery and participation in creating global frameworks, he said.

In his remarks at the BRICS summit, UN Secretary-General António Guterres expressed his concern over the weaponization of AI and stressed the importance of AI governance that is focused on equity. He said in order for this to be done, the current “multipolar world” must be addressed.

“We cannot govern AI effectively—and fairly—without confronting deeper, structural imbalances in our global system,” Guterres said.

Dhar emphasized that the inclusion of every person in the conversation on AI is crucial to creating legitimate global technological governance.

“The future of AI is being negotiated with immediacy and urgency,” Dhar said. “Whether it becomes a force for collective progress or a new vector for inequality depends on who is empowered to shape it.”

IPS UN Bureau Report


‘The Closure of Meta’s US Fact-Checking Programme Is a Major Setback in the Fight Against Disinformation’

Artificial Intelligence, Civil Society, Education, Featured, Global, Headlines, Press Freedom, TerraViva United Nations

Jan 24 2025 (IPS) – CIVICUS speaks with Olivia Sohr about the challenges of disinformation and the consequences of the closure of Meta’s fact-checking programme in the USA. Olivia is the Director of Impact and New Initiatives at Chequeado, an Argentine civil society organisation working since 2010 to improve the quality of public debate through fact-checking, combating disinformation, and promoting access to information and open data.


Olivia Sohr

In January 2025, Meta, the company that owns Facebook, Instagram and WhatsApp, announced the suspension of its US fact-checking programme. Instead, the company will implement a system where users can report misleading content. The decision came as Meta prepared for the start of the new Trump presidency. Explaining the change, Meta CEO Mark Zuckerberg said the company was trying to align itself with its core value of free speech. Meta also plans to move some of its content moderation operations from California to Texas, which it says is a response to concerns about potential regional bias.

What led to Meta’s decision to end its fact-checking programme?

While the exact details of the process that led to this decision are unknown, in his announcement Zuckerberg alluded to a ‘cultural shift’ that he said was cemented in the recent US election. He also expressed concern that the fact-checking system had contributed to what he saw as an environment of ‘excessive censorship’. As an alternative, Zuckerberg is proposing a community rating system to identify fake content.

This decision is a setback for information integrity around the world. Worryingly, Meta justifies its position by equating fact-checking journalism with censorship. Fact-checking is not censorship; it’s a tool that provides data and context to enable people to make informed decisions in an environment where disinformation is rife. Decisions like this increase opacity and hamper the work of those focused on combatting disinformation.

The role of fact-checkers on Meta’s platforms is to investigate and label content found to be false or misleading. However, decisions about the visibility or reach of such content will be made solely by the platform, which has said it will only reduce exposure and add context, not remove or censor content.

How the community grading system will work has not yet been specified, but the prospects are not promising. Experience from other platforms suggests that these models tend to increase disinformation and the spread of other harmful content.

What are the challenges of fact-checking journalism?

Fact-checking is extremely challenging. While those pushing disinformation can quickly create and spread completely false content designed to manipulate emotions, fact-checkers must follow a rigorous and transparent process that is time-consuming. They must constantly adapt to new and increasingly sophisticated disinformation strategies and techniques, which are proliferating through the use of artificial intelligence.

Meta’s decision to end its US verification programme makes our task even more difficult. One of the key benefits of this programme is that it has allowed us to reach out directly to those who spread disinformation, alerting them with verified information and stopping the spread at the source. Losing this tool would be a major setback in the fight against disinformation.

What are the potential consequences of this change?

Meta’s policy change could significantly weaken the information ecosystem, making it easier for disinformation and other harmful content to reach a wider audience. For Chequeado, this means we will have to step up our efforts to counter disinformation, within the platform and in other spaces.

In this scenario, verification journalism is essential, but it will be necessary to complement this work with media literacy initiatives, the promotion of critical thinking, the implementation of technological tools to streamline the work and research to identify patterns of disinformation and the vulnerability of different groups to fake news.

