Action Coalition on Civic Engagement in AI Design

Contact

Francesca Fanucci, Senior Legal Advisor, ECNL

Objectives

ECNL initiated the Action Coalition on Civic Engagement in Artificial Intelligence (AI) Design as a
community of actors who are passionate about amplifying civic voices in AI development and
design, safeguarding human rights and freedoms, and promoting rights-based AI that benefits
individuals and communities. The Action Coalition builds upon the work of the ECNL-led
MozFest Work Group 'Harnessing the civic voice in AI impact assessment', which aims to develop
guidance for the meaningful participation of external stakeholders, such as civil society organizations
and affected communities, in Human Rights Impact Assessments (HRIAs) of AI systems. These
assessments should be a requirement for any AI system deployed in communities and are
embedded in the Human Rights Due Diligence framework.

Our work to date has identified three recurring challenges that impede action:

• The need to improve the outreach to, and quality of, stakeholder engagement in the context
of human rights impact assessments related to AI design, development and deployment,
with emphasis on the groups most at risk from the use of technology.

• The lack of detailed understanding of the technologies among stakeholders, particularly
of their design, functionality, capabilities, and limits. This requires knowledge transfer and
capacity building for diverse stakeholders who can offer insight into human rights realities
for users and impacted groups around the world.

• The inadequate timing, quality and impact of stakeholder engagement, which prevents genuine
collaborative problem solving and practical results in more human-focused,
sustainable design and development.

This Action Coalition brings together actors who are concerned with or already work on civic
engagement and see an opportunity and benefit in sharing existing actions, identifying synergies and
complementarities, and aligning advocacy efforts. This will be achieved by:

i. collectively identifying challenges, obstacles, and resource gaps for meaningful
engagement;

ii. discussing pathways and potential solutions aimed at overcoming the identified obstacles
and gaps; and

iii. producing public outputs and a guidance framework on what constitutes
meaningful engagement between AI developers and stakeholders, to the benefit of the
larger 'responsible AI' community.

This Action Coalition will coordinate its activities with, and ensure broad civil society feedback
into, the Action Coalition on Responsible Technology.

Activities

During the 2022 year of action, the Action Coalition will conduct a series of activities that support
these objectives. The practical output of these activities includes developing a guidance framework for
the inclusion of the public, especially civil society and impacted communities, in human rights impact
assessments for AI, as a model for inclusive development of technology.

I. Action Workshops
The Action Coalition will host thematic workshops related to the development of the guidance framework
and the knowledge gaps identified, focusing on the lessons learned from ongoing work and
coordinating with other Action Coalitions as well as individual Action Coalition members. The
workshops will bring together stakeholders with an interest in the theme in question.

II. Guidance Framework (draft)
The Action Coalition will develop a draft guidance framework for the inclusion of relevant stakeholders,
namely civil society and impacted communities, in human rights impact assessments for AI, as a
model for inclusive development of technology. The draft guidance framework will be
disseminated as a common starting point for further discussion and piloting. It will also feed into
the work of other relevant Action Coalitions and may be presented to the Biden Democracy
Summit, with support from the Danish MFA.

III. Piloting of Guidance Framework
The Action Coalition will seek to pilot the draft Guidance Framework to develop a 'good practice model
for inclusive innovation', designed and tested for use on selected AI products and consultation
processes. After testing, the guidance framework will be evaluated and further
developed for practical use.

IV. Public report
At the end of the year of action, the Action Coalition will develop a brief public report summarizing
the activities of the Action Coalition and its members.

Partners
  • European Center for Not-for-Profit Law Stichting (ECNL)
  • Urvashi Aneja (Director, Digital Futures Lab)
  • Hilary Sutcliffe (Director, SocietyInside)
  • Mathew Mytka (Co-founder & Chief Vision Shaper, Tethix)
  • European Disability Forum
  • Prof. Alessandro Mantelero (Associate Professor of Private Law and Law & Technology at the Polytechnic University of Turin – EC Jean Monnet Chair in Mediterranean Digital Societies and Law)