As algorithms and related AI products spread across public and private services, improving the literacy of digital inclusion workers and rights advocates ensures that these systems are understood and that professionals can help citizens act on problems with them. By identifying and fostering the competencies needed to understand and use algorithms, the project accompanies the digital transformation of society without leaving citizens behind.
Problematically, the citizens most at risk of being targeted or discriminated against by algorithms are also those most likely to lack information and options for action. This scissor effect has been highlighted by many activists. In France, Lighthouse Reports partnered with Le Monde to investigate an algorithm deployed by France’s Caisse Nationale des Allocations Familiales (CNAF), the agency responsible for the French social security system. The algorithm, in use for more than 10 years, attempts to predict which benefit recipients are committing fraud. It systematically targets people in the most precarious situations while making it nearly impossible for the better off to be investigated. Similarly, the Dutch government used SyRI, a secret algorithm, to detect possible social welfare fraud. Civil rights activists took the matter to court and managed to get public organizations to consider less repressive alternatives. Researchers in Belgium are worried that similar tools are or were in use there. Many other examples of damaging algorithms exist in other EU countries and have been documented by the Berlin-based NGO AlgorithmWatch.
By improving inclusion workers’ understanding of algorithms, the project will make them better able to critique and resist incoming AI products — especially those arriving through the distribution of generative AI within search engines and chatbots — which are often imposed on citizens without democratic oversight.
The project will anticipate and respond to increasing demand on digital inclusion workers to explain and assist citizens dealing with algorithm-based decisions (e.g. when one has to fill in tax documents or write a prompt in a generative AI chatbot). By working with professionals, we will identify which algorithms need to be explained and how. Working together through engagement workshops and other participatory initiatives will reinforce both their literacy and a sense of community.
As digital inclusion workers increasingly become key intermediaries of information, inquiring into and unpacking algorithms will be an opportunity to question the values and policies at the heart of our relations with public bodies and big corporate platforms. The project will thus help structure and strengthen European cooperation and common values on digital inclusion through the sharing and transfer of best practices across borders. Ultimately, it promotes civic engagement at its core: the right to information for EU citizens — promoted in EU regulations such as the AI Act and the Digital Services Act — including in the growing number of instances where this information concerns decisions based on algorithms.
We face algorithms every day: in the automated calculation of government subsidies, through facial recognition in the streets, in the calculation of an insurance policy, or via a chatbot helping us navigate the maze of consumer policies. Nevertheless, algorithms remain difficult to understand. There is a digital divide in algorithmic transparency: only experts have the time and skills to understand algorithms. The challenge of algorithmic transparency compounds the challenge of digital inclusion: how can one understand, or even challenge, a decision motivated by an algorithmic process without the skills needed to navigate the digital environment, or even without access to a computer or an internet connection?
Understanding algorithms is a challenge for all: as the Public Digital Spaces in Belgium make clear, users of digital inclusion services come from all walks of life and from all social and age categories. In France, a study by the Data Publica Observatory stated that “five years after the entry into force of the law for a digital republic, the transparency of public algorithms is implemented only anecdotally” and highlighted among the obstacles “the difficulty elected officials and public sector agents have in grasping a technical subject”. A national citizen panel on pressing AI research needs, conducted by Waag Futurelab and commissioned by the Dutch research funder NWO, found that citizens often do not understand the components and workings of algorithms and called for more research on how to increase AI transparency. The needs are therefore widespread.
Given the lack of algorithmic transparency and European citizens’ insufficient understanding of the topic, what is at stake is the effective exercise of citizenship and civic engagement.
Algorithmic transparency is not just about making code available. Close to 20 years of open data policies and data literacy initiatives have shown that transparency remains theoretical without another crucial step: mediation, in the sense of communication, explanation, and the creation of communities of practice dedicated to making algorithms knowable.
The ALGO-LIT project aims to map, connect, train and equip digital inclusion workers and similar practitioners on the topic of algorithmic transparency in France, Belgium and the Netherlands. Building on previous work, our action-research methodology will document the needs and practices of digital inclusion workers in the field of algorithmic literacy and transparency; share practices and train these workers; co-construct and adapt tools with the community; and promote and institutionalise skills in the field of algorithmic literacy and transparency across the EU.
Datactivist (coordinator, FR)
Datactivist is a French cooperative and participatory company founded in 2016 whose mission is to open up data and make them useful and usable.
Loup Cellard — Lead Researcher, loup [at] datactivist [.] coop
Maëlle Fouquenet — Senior Researcher, maelle [at] datactivist [.] coop
Margaux Larre-Perrez and Stéphanie Trichard — Admin. finance, margaux [at] datactivist [.] coop | stephanie [at] datactivist [.] coop
La Mednum (partner, FR)
Since 2017, La Mednum has been bringing players together, energising and networking French digital inclusion workers to help them find the framework for cooperation that suits them best.
Quitterie de Marignan — Project Manager, quitterie [.] demarignan [at] lamednum [.] coop
FARI — AI for the Common Good Institute (partner, BE)
FARI is a non-profit university institute on AI, data and robotics focused on the Common Good and jointly initiated by two Brussels universities (VUB & ULB).
Alice Demaret — AI Impact Advisor, alice [.] demaret [at] fari [.] brussels
Léa Rogliano — Head of FARI's Citizen Engagement Hub, lea [.] rogliano [at] fari [.] brussels
Carl-Maria Mörch — Co-director, Université Libre de Bruxelles (ULB), Carl [.] Morch [at] fari [.] brussels
Waag Futurelab (partner, NL)
Waag reinforces critical reflection on technology, develops technological and social design skills, and encourages social innovation. In doing so, it contributes to the research, design and development of a sustainable, just society.
Danny Lämmerhirt — Senior Researcher, danny [at] waag [.] org
Tessel van Leeuwen — Researcher, tessel [at] waag [.] org
Bente Zwankhuizen — Project Manager, bente [at] waag [.] org
The project is co-funded for three years by an EU Erasmus+ Cooperation Partnership grant (Dec 2024–Dec 2027).
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the French Erasmus+ Agency. Neither the European Union nor the granting authority can be held responsible for them.