• With drones and thermal cameras, Greek officials monitor refugees

    Athens says a new surveillance system will boost security, but critics raise alarm over its implications for privacy.

    “Let’s go see something that looks really nice,” says Anastasios Salis, head of information and communications technology at the Greek Migration and Asylum Ministry in Athens, before entering an airtight room sealed behind two interlocking doors, accessible only with an ID card and fingerprint scan.

    Beyond these doors is the ministry’s newly installed centralised surveillance room.

    The front wall is covered by a vast screen. More than a dozen rectangles and squares display footage from three refugee camps already connected to the system.

    Some show a basketball court in a refugee camp on the island of Samos. Another screen shows the playground and another the inside of one of the containers where people socialise.

    Overhead, lights suddenly flash red. A potential threat has been detected in one of the camps. This “threat” has been flagged by Centaur, a high-tech security system the Greek Migration Ministry is piloting and rolling out at all of the nearly 40 refugee camps in the country.

    Centaur includes cameras and motion sensors. It uses algorithms to automatically predict and flag threats such as the presence of guns, unauthorised vehicles, or unusual visits into restricted areas.

    The system subsequently alerts the appropriate authorities, such as the police, fire brigade, and private security working in the camps.

    From the control room, operators deploy camera-equipped drones and instruct officers stationed at the camp to rush to the location of the reported threat.

    Officers carry smartphones loaded with software that allows them to communicate with the control centre.

    Once they determine the nature and severity of the threat, the control room guides them on the ground to resolve the incident.

    Video footage and other data collected as part of the operation can then be stored under an “incident card” in the system.

    This particular incident is merely a simulation, presented to Al Jazeera during an exclusive tour and preview of the Centaur system.

    The aim of the programme, according to Greek officials, is to ensure the safety of those who live inside the camps and in surrounding communities.

    “We use technology to prevent violence, to prevent events like we had in Moria – the arson of the camp. Because safety is critical for everyone,” Greek Migration Minister Notis Mitarachi told Al Jazeera at the November inauguration of a new, EU-funded “closed-controlled” refugee camp on Kos island, one of the first facilities to be connected to the Centaur system.

    ‘Dystopian’ surveillance project

    Nearly 40 cameras, which can be operated from the control room, are being installed in each camp.

    There will also be thermal cameras, drones, and other technology – including augmented reality glasses, which will be distributed to police and private security personnel.

    “This was not to monitor and invade the privacy of the people [in the camps],” said Salis, one of the architects of Centaur. “You’re not monitoring them. You’re trying to prevent bad things from happening.”

    Greek authorities present the new surveillance system as a security measure, but civil society groups and European lawmakers have criticised the move.

    “This fits a broader trend of the EU pouring public money into dystopian and experimental surveillance projects, which treat human beings as lab rats,” Ella Jakubowska, policy and campaigns officer at European Digital Rights (EDRi), told Al Jazeera. “Money which could be used to help people is instead used to punish them, all while the surveillance industry makes vast profits selling false promises of magical technology that claims to fix complex structural issues.”

    Recent reporting, which revealed Centaur will be partly financed by the EU COVID Recovery fund, has led a group of European lawmakers to write to the European Commission with their concerns about its implementation.

    Homo Digitalis, a Greek digital rights advocacy group, and EDRi said they made several requests for information on what data protection assessments were carried out before the development and deployment of Centaur.

    Such analysis is required under the EU’s General Data Protection Regulation (GDPR). They have also asked what data will be collected and how long it will be held by authorities. Those requests, they said, have gone unanswered.

    The Greek Migration Ministry did not respond to Al Jazeera’s query on whether an impact assessment was completed, and on policies regarding data retention and the processing of data related to children.

    In Samos, mixed feelings

    Advocates in Samos told Al Jazeera they raised concerns about whether camp residents had been adequately notified about the presence of these technologies.

    But Salis, at the control centre, said this has been achieved through “signs – a lot of signs”, in the camps.

    The system does not currently incorporate facial recognition technology, at least “not yet”, according to Leonidas Petavrakis, a digital software specialist with ESA Security Solutions S.A., one of the companies contracted for the Centaur project.

    The potential use of facial recognition in this context is “a big concern”, said Konstantinos Kakavoulis of Homo Digitalis.

    Facial recognition systems often misidentify people of colour and can lead to wrongful arrests and convictions, according to studies. Human rights organisations globally have called for their use to be limited or banned.
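The scale of such a disparity can be made concrete with a back-of-the-envelope calculation. The sketch below uses entirely hypothetical false-positive rates (the article cites no figures); it only shows how a higher group-specific error rate translates directly into more people being wrongly flagged:

```python
# Illustrative only: hypothetical false-positive rates for two groups.
# A face-matching system screening the same number of people from each
# group produces wrongful flags in proportion to its group-specific
# false-positive rate (FPR).

def expected_false_matches(people_screened: int, fpr: float) -> float:
    """Expected number of innocent people wrongly flagged as a match."""
    return people_screened * fpr

screened = 10_000       # people screened per group (hypothetical)
fpr_group_a = 0.001     # 0.1% false-positive rate (hypothetical)
fpr_group_b = 0.010     # 1.0% false-positive rate (hypothetical)

flags_a = expected_false_matches(screened, fpr_group_a)
flags_b = expected_false_matches(screened, fpr_group_b)

print(f"Group A: {flags_a:.0f} wrongful flags")   # 10
print(f"Group B: {flags_b:.0f} wrongful flags")   # 100
```

With these assumed numbers, the same screening effort produces ten times as many wrongful flags for group B, which is why studies link error-rate disparities to wrongful stops and arrests.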

    An EU proposal on regulating artificial intelligence, unveiled by the European Commission in April, does not go far enough to prevent the misuse of AI systems, critics claim.

    For some of those living under the glare of this EU-funded surveillance system, feelings are mixed.

    Mohammed, a 25-year-old refugee from Palestine living in the new Samos camp, said that he did not always mind the cameras as he thought they might prevent fights, which broke out frequently at the former Samos camp.

    “Sometimes it’s [a] good feeling because it makes you feel safe, sometimes not,” he said, but added that the sense of security came at a price.

    “There’s not a lot of difference between this camp and a prison.”

    https://www.aljazeera.com/news/2021/12/24/greece-pilots-high-tech-surveillance-system-in-refugee-camps
    #Grèce #réfugiés #asile #migrations #surveillance #complexe_militaro-industriel #drones #caméras_thérmiques #Samos #îles #camps_de_réfugiés #Centaur #algorythme #salle_de_contrôle #menace #technologie #EU_COVID_Recovery_fund #reconnaissance_faciale #intelligence_artificielle #AI #IA

    ---

    on these new closed (and surveilled) refugee camps, notably on the Greek #îles:
    https://seenthis.net/messages/917173

    ping @etraces

  • Border control technologies in Italy: identification, facial recognition and European funding


    Executive summary

    The documented use of a facial recognition system by the municipal administration of Como and by the Italian State Police has opened a debate in Italy, too, about a technology that has long been criticised abroad for its algorithmic inaccuracy and lack of transparency. This is certainly a worrying issue for everyone, but it takes on even more dangerous characteristics when it affects particularly vulnerable groups or individuals such as migrants, refugees and asylum seekers. In their case, data and information are processed by government agencies for surveillance and control purposes, with far weaker safeguards than those afforded to European and Italian citizens. This poses a great risk for these people, since the identification procedures carried out on their arrival in Italy, inside the hotspots, risk becoming a double-edged sword for their stay in the country (or in Europe), creating a state of continuous surveillance because of their condition. Once again, certain categories of people are forced to serve as a “testing ground” for experiments with control and surveillance devices, demonstrating that power relations exist and are reproduced through technology as well, leading to the creation of two distinct categories: those who surveil and those who are surveilled.

    This research shows that the procedures for identifying and categorising migrants, refugees and asylum seekers make extensive use of biometric data (the Italian police collect both fingerprints and photographs of their faces), but it is not always easy to understand how these are applied. At the moment of identification, the people concerned have very little opportunity to fully understand the path their personal and biometric data will take, or to object to the weight this flow of information will then carry for their situation in Italy and across the European Union. For several years, in fact, the EU has been promoting the identification of migrants, foreigners and asylum seekers through a massive use of technology: starting at sea, patrolled by remotely piloted vessels and aircraft that “scan” incoming migrants; all the way to landfall, where, in addition to compulsory identification and photographing, migrants have risked having a “smart” video camera pointed at them.

    Ample space is devoted to how the Italian state has already been using facial recognition technology for several years, without independent organisations or professionals being able to scrutinise its operation. Beyond the lack of transparency of the algorithms that make it work, no clear information is available on the number of people actually included in the database used to match faces, AFIS (Automated Fingerprint Identification System).

    The Italian police had, in fact, intended to use a facial recognition system, SARI Real-Time, to identify in real time the people aboard a vessel during disembarkation on the Italian coast. The SARI Real-Time system, originally purchased for use at demonstrations and public events, was rendered unusable following a negative ruling by the Italian Data Protection Authority (Garante della Privacy): it would risk introducing unjustified mass surveillance. The Garante’s decision therefore protects not only those who live in Italy but also those who, in a situation of extreme vulnerability, arrive on our coasts after an interminable journey and are subjected to disproportionate scrutiny even before receiving medical support and an assessment of their legal status.

    As the Hermes Center for Transparency and Digital Human Rights, since 2011 we have been questioning the workings and purposes of technological innovations, analysing them not only from a technical point of view but also through the lens of digital human rights. In recent years, the datafication of society, through the indiscriminate collection of personal data and the extraction of information (and value) about everyone’s behaviour and activities, has been the central theme of the association’s research, analysis and advocacy. We believe that what should be questioned is not only digital technology created for the supposed purpose of fostering progress or providing an objective answer to complex social phenomena, but also the notion of technology as neutral, with roughly the same repercussions for every individual in society. In our view, any discussion of technology must encompass a broader political and sociological reflection, one that seeks to grasp the difference between those who wield technology and those who are subjected to it.

    Main findings:

    https://protecht.hermescenter.org
    #rapport #Hermes #frontières #Italie #reconnaissance_faciale #réfugiés #asile #migrations #contrôles_frontaliers #identification #financements_européens #technologie #complexe_militaro-industriel #Côme #surveillance #biométrie #données_biométriques #catégorisation #photos #empreintes_digitales #AFIS #algorythmes #Automated_Fingerprint_Identification_System #SARI_Real-Time #database #base_de_données

    on the rollout of facial recognition in Como:
    https://seenthis.net/messages/859963

    ping @etraces

  • Don’t assume technology is racially neutral

    Without adequate and effective safeguards, the increasing reliance on technology in law enforcement risks reinforcing existing prejudices against racialised communities, writes Karen Taylor.

    Within the European Union, police and law enforcement are increasingly using new technologies to support their work. Yet little consideration is given to the potential misuse of these technologies and their impact on racialised communities.

    When the everyday experience of racialised policing and ethnic profiling is already causing significant physical, emotional and social harm, how much will these new developments further harm people of colour in Europe?

    With racialised communities already over-policed and under-protected, resorting to data-driven policing may further entrench existing discriminatory practices, such as racial profiling and the construction of ‘suspicious’ communities.

    This was highlighted in a new report published by the European Network Against Racism (ENAR) and the Open Society Justice Initiative.

    Using systems to profile, survey and provide a logic for discrimination is not new; what is new is the sense of neutrality afforded to data-driven policing.

    The ENAR report shows that law enforcement agencies present technology as ‘race’ neutral and independent of bias. However, such claims overlook the evidence of discriminatory policing against racialised minority and migrant communities throughout Europe.

    European criminal justice systems police minority groups according to the myths and stereotypes about the level of ‘risk’ they pose rather than the reality.

    This means racialised communities will feel a disproportionate impact from new technologies used for identification, surveillance and analysis – such as crime analytics, the use of mobile fingerprinting scanners, social media monitoring and mobile phone extraction – as they are already over-policed.

    For example, in the UK, social media is used to track ‘gang-associated individuals’ within the ‘Gangs Matrix’. If a person shares content on social media that references a gang name or certain colours, flags or attire linked to a gang, they may be added to this database, according to research by Amnesty International.

    Given the racialisation of gangs, it is likely that such technology will be deployed for use against racialised people and groups.

    Another technology, automatic number plate recognition (ANPR) cameras, raises concerns that cars can be ‘marked’, leading to increased stop and search.

    The Brandenburg police in Germany used the example of looking for “motorhomes or caravans with Polish license plates” in a recent leaked internal evaluation of the system.

    Searching for license plates of a particular nationality and looking for ‘motorhomes or caravans’ suggests a discriminatory focus on Travellers or Roma.

    Similarly, mobile fingerprint technology enables police to check against existing databases (including immigration records), and it disproportionately affects racialised communities, given the racial disparity in who is stopped and searched.

    Another way in which new technology negatively impacts racialised communities is that many algorithmically driven identification technologies, such as automated facial recognition, disproportionately misidentify people from black and other minority ethnic groups – and, in particular, black and brown women.

    This means that police are more likely to wrongfully stop, question and potentially arrest them.

    Finally, predictive policing systems are likely to present geographic areas and communities with a high proportion of minority ethnic people as ‘risky’ and subsequently make them a focus for police attention.

    Research shows that data-driven technologies informing predictive policing have increased arrest rates for racialised communities by 30 percent. Indeed, place-based predictive tools take data from police records generated by over-policing certain communities.

    Forecasting is based on the higher rates of police intervention in those areas, suggesting police should further prioritise those areas.
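The feedback loop the last two paragraphs describe (patrols produce records, records drive forecasts, forecasts direct patrols) can be sketched as a toy simulation. All numbers here are hypothetical and the model is deliberately crude; it exists only to make the dynamic visible:

```python
# Toy model of place-based predictive policing feedback (illustrative only).
# Recorded incidents in a district scale with how many patrols are sent
# there; the next round's patrols are allocated in proportion to recorded
# incidents. Two districts have IDENTICAL underlying crime rates, but
# district 0 starts out more heavily patrolled.

def simulate(initial_patrols, true_crime_rates, rounds=10, detection=0.1):
    """Return the patrol allocation after `rounds` of forecast-and-deploy."""
    patrols = list(initial_patrols)
    total = sum(patrols)
    for _ in range(rounds):
        # Records reflect where police looked, not just where crime is.
        recorded = [p * detection * c
                    for p, c in zip(patrols, true_crime_rates)]
        # "Forecast": allocate all patrols proportionally to recorded incidents.
        patrols = [total * r / sum(recorded) for r in recorded]
    return patrols

# Equal true crime rates, unequal starting attention.
final = simulate(initial_patrols=[80, 20], true_crime_rates=[1.0, 1.0])
print(final)  # the initial 80/20 split never corrects toward 50/50
```

Because the forecast is trained only on records that the patrols themselves generate, the unequal starting allocation reproduces itself round after round: the system keeps prioritising the district it already watched most, even though the underlying crime rates are identical.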

    We often – rightly – discuss the ethical implications of new technologies and the current lack of public scrutiny and accountability. Yet we also urgently need to consider how they affect and target racialised communities.

    The European Commission will present a proposal on Artificial Intelligence within 100 days of taking office. This is an opportunity for the European Parliament to put safeguards in place that ensure that the use of AI does not have any harmful and/or discriminatory impact.

    In particular, it is important to consider how the use of such technologies will impact racialised communities, so often overlooked in these discussions. MEPs should also ensure that any data-driven technologies are not designed or used in a way that targets racialised communities.

    The use of such data has wide-ranging implications for racialised communities, not just in policing but also in counterterrorism and immigration control.

    Governments and policymakers need to develop processes for holding law enforcement agencies and technology companies to account for the consequences and effects of technology-driven policing.

    This should include implementing safeguards to ensure such technologies do not target racialised as well as other already over-policed communities.

    Technology is not neutral or objective; unless safeguards are put in place, it will exacerbate racial, ethnic and religious disparities in European justice systems.

    https://www.theparliamentmagazine.eu/articles/opinion/don%E2%80%99t-assume-technology-racially-neutral

    #neutralité #technologie #discriminations #racisme #xénophobie #police #profilage_ethnique #profilage #données #risques #surveillance #identification #big-data #smartphone #réseaux_sociaux #Gangs_Matrix #automatic_number_plate_recognition (#ANPR) #Système_de_reconnaissance_automatique_des_plaques_minéralogiques #plaque_d'immatriculation #Roms #algorythmes #contrôles_policiers

    ---

    To download the report:


    https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

    ping @cede @karine4 @isskein @etraces @davduf

  • THE TERMINATOR STUDIES
    http://terminatorstudies.org

    Jobocalypse: The End of Human Jobs and How Robots will Replace Them

    Jobocalypse is a look at the rapidly changing face of robotics and how it will revolutionize employment and jobs over the next thirty years. Ben Way lays out the arguments in favor of and against the mechanization of our society, as well as the amazing advantages and untold risks, as we march into this ever-present future.

    #robot #terminator #algorythme #art

  • MUTE VOL. 3, NO. 4 - SLAVE TO THE ALGORITHM
    http://www.metamute.org/editorial/magazine/mute-vol.-3-no.-4-slave-to-algorithm
    As the financial crisis fastens its grip ever tighter around the means of human and natural survival, the age of the algorithm has hit full stride. This phase-shift has been a long time coming of course, and was undoubtedly as much a cause of the crisis as its effect, with self-propelling algorithmic power replacing human labour and judgement and creating event fields far below the threshold of human perception and responsiveness.
    #trading #finance #Algorythmes #haute_frequence #alternatives #revue