Civil society perspectives on AI in the EU

Robot hand holding EU flag.

As part of the WISERD study ‘New arenas for civic expansion: humans, animals, and Artificial Intelligence (AI)’, we presented new research at a WHEB event in Brussels last month that reveals the views and concerns of civil society organisations (CSOs) in relation to AI in the EU.

The European Commission is legislating to coordinate the regulatory framework for AI and the potential implications of the new technology are profound. For example, one account has concluded that AI is ‘contributing to a transformation of society happening ten times faster and at 300 times the scale… of the Industrial Revolution’.[1]

Addressing the gap

The Artificial Intelligence Act (AIA) has been a hard-fought, much-contested piece of legislation. A few weeks ago, negotiators finally agreed a provisional deal that will see legislation from 2025 onwards covering AI in systems such as facial recognition, surveillance and chatbots. The European Commission’s 2018 ‘Strategy on Artificial Intelligence’ set out its vision of a “European ecosystem of excellence and trust”. However, much of the development of the AI Act has related to the role and actions of governments, state agencies and corporations. Accordingly, our work addresses a gap by examining thinking on AI and civil society.

The role of civil society and the impact of AI

We carried out discourse analysis of 400 position papers submitted by governments, public agencies, NGOs and businesses to the open public consultation on the European Commission’s Artificial Intelligence White Paper. We wanted to examine what they said about the role of civil society and the impact of AI.

Our analysis identified several key themes. Foremost was the critical, watchdog role of civil society. In short, civil society input is needed for democratic, regulatory oversight and accountability of AI. For example, one respondent asserted that to “Ensure democratic oversight and systems of accountability… the EU must integrate mechanisms for genuine oversight and consultation, with civil society organisations and communities most likely to experience the deleterious effects… There must also be accessible systems of accountability…”

A further major theme was civil society’s role in safeguarding fundamental human rights. As this response argued, there should be… “effective channels of communication with local civil society groups and researchers [and they] should conduct Human Rights Impact Assessments through the life cycle of their A.I. systems”.

Allied to rights, a further core strand in the consultation responses underlined the need for civil society to play a strong role in promoting equality of opportunity. Gender equality was repeatedly emphasised, notably civil society’s role in applying gender mainstreaming to AI policies (in line with Amsterdam Treaty requirements), as well as the use of gender budgeting and robust measures to address AI and violence and abuse against women. Gender imbalance in the workforces developing AI technologies and their regulation was a further worry.

Another core theme in the consultation responses centred on trust: civil society involvement in AI use and regulation is essential to building trust, and in particular to regulatory legitimacy founded on accountability and transparency. For example, one respondent said “review of intelligent systems… [should include] representations from interested persons and groups in civil society and, to the fullest extent possible… to provide transparency… [this is integral to…] creating trust and providing control for European citizens…”

What are the implications of the study findings?

In conceptual terms, the findings tell us we must broaden our thinking around the concept of “civil society” to incorporate non-human intelligence and its impact on associative life. Two areas stand out from the analysis: 1. the increasing influence of AI on civil society’s democratic role in holding governments and corporations to account, and 2. the potential threat AI poses to fundamental rights and freedoms. The civil society responses analysed in our research are clear that the EU’s regulatory framework is a key challenge. Moreover, the stakes are high and human rights need to be safeguarded in the face of this technological shift.

Our work shows that civil society viewpoints relate to shifting power relations – notably between citizens and civil society on one hand and the state and corporations on the other. Their responses are explicit in underlining how this is in large measure about AI’s impact on trust – both institutional and inter-personal – and the need for accountability and transparency in the new EU regulatory framework.

The message from our analysis is that civil society needs to be at the heart of shaping the regulation of AI use in Europe and beyond. In this regard, lessons need to be learned. Our analysis suggests that, to date, civil society could and should have had a stronger role in the development of the EC regulatory framework – their testimony alludes to feelings of marginalisation in the process so far.


[1] Richard Dobbs, James Manyika, Jonathan Woetzel, ‘The four global forces breaking all the trends’, McKinsey Global Institute (April 2015) – cited in House of Commons Science and Technology Committee Robotics and artificial intelligence, Fifth Report of Session 2016–17, p.12.


Image credit: kemalbas via iStock.