Don’t assume technology is racially neutral
Without adequate and effective safeguards, the increasing reliance on technology in law enforcement risks reinforcing existing prejudices against racialised communities, writes Karen Taylor.
Within the European Union, police and law enforcement are increasingly using new technologies to support their work. Yet little consideration is given to the potential misuse of these technologies and their impact on racialised communities.
When the everyday experience of racialised policing and ethnic profiling is already causing significant physical, emotional and social harm, how much will these new developments further harm people of colour in Europe?
With racialised communities already over-policed and under-protected, resorting to data-driven policing may further entrench existing discriminatory practices, such as racial profiling and the construction of ‘suspicious’ communities.
This was highlighted in a new report published by the European Network Against Racism (ENAR) and the Open Society Justice Initiative.
Using systems to profile, survey and provide a logic for discrimination is not new; what is new is the sense of neutrality afforded to data-driven policing.
The ENAR report shows that law enforcement agencies present technology as ‘race’ neutral and independent of bias. However, such claims overlook the evidence of discriminatory policing against racialised minority and migrant communities throughout Europe.
European criminal justice systems police minority groups according to the myths and stereotypes about the level of ‘risk’ they pose rather than the reality.
This means racialised communities will feel a disproportionate impact from new technologies used for identification, surveillance and analysis – such as crime analytics, mobile fingerprinting scanners, social media monitoring and mobile phone extraction – as they are already over-policed.
For example, in the UK, social media is used to track ‘gang-associated individuals’ within the ‘Gangs Matrix’. If a person shares content on social media that references a gang name or certain colours, flags or attire linked to a gang, they may be added to this database, according to research by Amnesty International.
Given the racialisation of gangs, it is likely that such technology will be deployed for use against racialised people and groups.
Another technology, automatic number plate recognition (ANPR) cameras, raises concerns that cars can be ‘marked’, leading to increased stop and search.
The Brandenburg police in Germany used the example of looking for “motorhomes or caravans with Polish license plates” in a recent leaked internal evaluation of the system.
Searching for license plates of a particular nationality and looking for ‘motorhomes or caravans’ suggests a discriminatory focus on Travellers or Roma.
Similarly, mobile fingerprint technology enables police to check against existing databases (including immigration records), and disproportionately affects racialised communities, given the racial disparity of those stopped and searched.
Another way in which new technology negatively impacts racialised communities is that many algorithmically-driven identification technologies, such as automated facial recognition, disproportionately misidentify people from black and other minority ethnic groups – and, in particular, black and brown women.
This means that police are more likely to wrongfully stop, question and potentially arrest them.
Finally, predictive policing systems are likely to present geographic areas and communities with a high proportion of minority ethnic people as ‘risky’ and subsequently make them a focus for police attention.
Research shows that the data-driven technologies that inform predictive policing have increased arrest rates for racialised communities by 30 percent. Indeed, place-based predictive tools take data from police records generated by over-policing certain communities.
Forecasting is based on the higher rates of police intervention in those areas, suggesting police should further prioritise those areas.
We often – rightly – discuss the ethical implications of new technologies and the current lack of public scrutiny and accountability. Yet we also urgently need to consider how they affect and target racialised communities.
The European Commission will present a proposal on Artificial Intelligence within 100 days of taking office. This is an opportunity for the European Parliament to put safeguards in place to ensure that the use of AI does not have any harmful or discriminatory impact.
In particular, it is important to consider how the use of such technologies will impact racialised communities, so often overlooked in these discussions. MEPs should also ensure that any data-driven technologies are not designed or used in a way that targets racialised communities.
The use of such data has wide-ranging implications for racialised communities, not just in policing but also in counterterrorism and immigration control.
Governments and policymakers need to develop processes for holding law enforcement agencies and technology companies to account for the consequences and effects of technology-driven policing.
This should include implementing safeguards to ensure such technologies do not target racialised as well as other already over-policed communities.
Technology is not neutral or objective; unless safeguards are put in place, it will exacerbate racial, ethnic and religious disparities in European justice systems.
To download the report: