“If Silicon Valley were a country, it would probably be the richest in the world. So how genuinely committed are Big Tech and AI to funding and fostering human rights over profits? The barebones truth is that if democracy were profitable, human rights lawyers and defenders, including techtivists from civil society organizations, wouldn’t be sitting around multistakeholder engagement tables demanding accountability from Big Tech and AI. How invested are they in real social impact centred on rights, despite glaring evidence to the contrary?” asks Nina Sangma, of the Asia Indigenous Peoples Pact, a regional organization founded in 1992 by Indigenous Peoples’ movements with over 40 members across 14 countries in the Asia-Pacific region.
We are currently at a critical juncture where most countries lack a comprehensive AI policy or regulatory framework. The sudden reliance on AI and other digital technologies has introduced new – and often “invisible” – vulnerabilities, and we have just seen the tip of the iceberg, literally melting from the effects of climate change.
Some things we have already seen though: AI is still a product of historical data representing inequities and inequalities. A study analyzing 100+ AI-generated images using Midjourney’s diffusion models revealed consistent biases, including depicting older men for specialized jobs, binary gender representations, featuring urban settings regardless of location, and generating images predominantly reinforcing “ageism, sexism and classism”, with a bias toward a Western perspective.
Data sources continue to be “toxic”. AI tools learn from vast amounts of training data, often consisting of billions of inputs scraped from the internet. This data risks perpetuating harmful stereotypes and often contains toxic content such as pornography, misogyny, violence, and bigotry. Furthermore, researchers found bias in up to 38.6% of ‘facts’ used by AI.
Despite increased awareness, the discourse surrounding AI, like the technology itself, has predominantly been shaped by “Western, whiteness, and wealth”. The discrimination that we see today is the result of a cocktail of “things gone wrong” – ranging from discriminatory hiring practices based on gender and race, to the prevalence of algorithmic biases.
“Biases are not a coincidence. Artificial intelligence is a machine that draws conclusions from data based on statistical models, therefore, the first thing it eliminates is variations. And in the social sphere that means not giving visibility to the margins,” declares Judith Membrives i Llorens, head of digital policies at Lafede.cat – Organitzacions per la Justícia Global.
“AI development isn’t the sole concern here. The real issue stems from keeping citizens in the dark, restricting civic freedoms and the prevalence of polarisation and prejudice on several dimensions of our societies. This results in unequal access, prevalent discrimination, and a lack of transparency in technological processes and beyond. Despite acknowledging the potential and power of these technologies, it is clear that many are still excluded and left at the margins due to systemic flaws. Without addressing this, the global development of AI and other emerging technologies won’t be inclusive. Failure to act now and to create spaces of discussion for new visions to emerge, will mean these technologies continue to reflect and exacerbate these disparities,” says Mavalow Christelle Kalhoule, civil society leader in Burkina Faso and across the Sahel region, and Chair of the global civil society network Forus.
The Civil Society Manifesto for Ethical AI asks: what are the potential pitfalls of using current AI systems to inform future decisions, particularly in terms of reinforcing prevailing disparities?
Today, as EU policymakers are expected to reach a political agreement on the AI Act, we ask: do international standards for regulating machine learning include the voice of the people? With the Manifesto we not only explore, challenge, disrupt, and reimagine the underlying assumptions within this discourse, but also broaden the discussion to incorporate communities beyond the traditional “experts.” Nothing about us, without us.
“We want Artificial Intelligence, but created by and for everyone, not only for a few,” adds Judith Membrives i Llorens.
Developed by over 50 civil society organisations, the Manifesto includes 17 case studies on their experiences, visions and stories around AI – from the “Internet of Cows” to the impact of AI on workers’ rights and on civic space. With each story, we want to weave a different path to build new visions of AI systems that expand rather than restrict freedoms worldwide.
“The current development of AI is by no means an inevitable path. It is shaped by Big Tech companies because we let them. It is time for the civil society to stand up for their data rights,” says Camilla Lohenoja, of SASK, the workers’ rights organisation of the trade unions of Finland.
“Focusing on ethical and transparent technology also means giving equal attention to the fairness and inclusivity of its design and decision-making processes. The integrity of AI is shaped as much by its development as by its application,” says Hanna Pishchyk of the youth group Digital Grassroots.
Ultimately, the Manifesto aims to trigger a global – and not just sectoral and Western-dominated – dialogue on AI development and application.
Civil society is not here as a mere token in multistakeholder spaces: we bring forward what others often dismiss, and we actively participate worldwide in shaping a technological future that embraces inclusivity, accountability, and ethical advancement.
Bibbi Abruzzini, Forus and Nina Sangma, Asia Indigenous Peoples Pact (AIPP)
IPS UN Bureau