Signed by a coalition of around 100 organisations including Sherpa, Amnesty International and the LDH (Ligue des droits de l’homme), this opinion piece calls for human rights and environmental justice to stay at the core of artificial intelligence regulations.
As France prepares to host the Artificial Intelligence (AI) Action Summit, more than 100 civil society organisations are sounding the alarm: human rights and environmental protection must stop being sacrificed on the altar of technological progress. Current AI developments perpetuate discrimination, exacerbate inequalities, destroy the planet, and fuel a system of global exploitation. As these issues will not be included in the Summit’s official programme, we have outlined them here.
As AI technologies develop rapidly, political leaders seem in no hurry to consider the human, social and environmental issues they raise. Ignoring the warnings of civil society organisations, they prefer to view these technologies solely through the prism of growth, productivity gains, and profit.
The potential future existential risks posed by AI are a distraction: these technologies already have very concrete impacts on the most vulnerable and discriminated-against populations, and already undermine human rights. Because they are built on biased datasets and embed the skewed worldviews of their designers, AI tools perpetuate stereotypes, reinforce social inequality, and limit access to resources and opportunities. Moreover, AI systems are deployed within the discriminatory and unequal structures that exist in every society. Their uses, often against a backdrop of austerity policies, deepen inequalities in access to health, employment, public services and social benefits. The scandals that have erupted in recent years are clear evidence of this: health algorithms with sexist and racist biases, an Austrian employment service algorithm refusing to direct women towards the IT sector, and the profiling of and discrimination against welfare beneficiaries in France, Denmark and the Netherlands.
Yet technologies are rarely the solution to fundamentally systemic problems. It would be better to address the root causes of these issues than to risk exacerbating human rights violations with AI systems. As more decisions are entrusted to algorithms, their biases can have dramatic consequences for our lives. Predictive AI systems are increasingly used in justice and law enforcement, at the risk of amplifying systemic racism. For instance, in the United States, an AI tool used to calculate recidivism risk identified Black defendants as ‘high risk’ twice as often as white defendants. But even if these biases were mitigated, focusing on predictive tools distracts us from considering broader reforms to the prison system.
These systems are also used for surveillance and identification purposes in border control or conflict settings, such as Lavender, an AI targeting tool that has caused the deaths of thousands of civilians in Gaza. Often, these technologies are developed in the Global North, like the tools created in Europe and used to surveil the Uyghur population in China.
Generative AI systems are also exploited for disinformation and destabilisation purposes by repressive regimes and private actors. Bots that manipulate information on health-related issues, racist disinformation during the last European elections, and audio and video deepfakes featuring electoral candidates are just some examples of how these technologies threaten the rule of law. AI-generated content also endangers women and children: 96% of deepfakes are non-consensual sexual content, widely used to harm women and to produce child sexual abuse material.
Moreover, these impacts are part of a global system of exploitation. AI, particularly generative AI, is an environmental disaster. By 2027, generative AI will require as much electricity as countries like Argentina or the Netherlands consume. The carbon emissions of Big Tech increased by 30 to 50% in 2024 due to the rapid development of these technologies. And the Global South is the most affected: the proliferation of data centres and the extraction of minerals like cobalt (used in batteries, for instance) harm the health of populations, pollute water and soil, and fuel violence and armed conflicts.
Inequalities between the Global North and South are also exacerbated by the technologies used for online content moderation. Digital giants allocate more resources to the Global North, favouring certain dominant languages and cultural narratives at the expense of others. Not to mention that AI systems are predominantly trained by exploited and underpaid workers from the Global South. For example, OpenAI paid Kenyan workers less than two dollars an hour to do the violent and taxing job of labelling toxic content.
In light of these colossal issues, the European AI Act, presented as an instrument to protect rights and freedoms, falls short, particularly on issues of surveillance and predictive policing. Moreover, this regulation will not apply beyond the borders of the European Union, even though the threats to human rights and the environment are global, and the export of surveillance AI generates profits for European companies. While European governments call for “sovereignty” in AI, the challenges posed by these systems transcend borders. Far from being merely a technological issue, AI concerns everyone. Everyone should have the ability to shape its development, or to reject it if it does not align with our vision of society. True progress lies in binding frameworks, democratic processes, and approaches centring international solidarity and the most affected communities, in order to place human rights and environmental justice at the core of AI regulation.