• President of Mexico should veto the biometric mobile phone registry
    https://www.accessnow.org/mexicos-new-biometric-mobile-phone-registry

    Collecting personal biometric data in exchange for a mobile SIM card is unnecessary and dangerous. Yet, this is the stark reality the Mexican Chamber of Senators has set in motion after voting yesterday, April 13, in favor of establishing a National Register of Mobile Phone Users. Access Now and Red en Defensa de los Derechos Digitales (R3D) call on the President of Mexico, Andrés Manuel López Obrador, to veto the alarming new biometric mobile phone registry. “The president must be consistent (...)

    #smartphone #SIM #biométrie #criminalité #données #reconnaissance #empreintes #AccessNow

    ##criminalité

  • Clearview AI Is Taking Facial Recognition Privacy to the Supreme Court
    https://onezero.medium.com/clearview-ai-is-taking-facial-recognition-privacy-to-the-supreme-cou

    International regulators have found that Clearview AI’s technology breaches their privacy laws. Clearview AI plans to challenge an Illinois law guarding against private facial recognition databases in the Supreme Court, according to Bloomberg Law. Illinois’s Biometric Information Privacy Act (BIPA) has been a thorn in the side of tech giants like Google, Facebook, and Apple for years, as it prohibits the collection of data like facial recognition images, fingerprints, and iris scans without (...)

    #Clearview #algorithme #biométrie #reconnaissance #iris #empreintes #surveillance #législation #[fr]Règlement_Général_sur_la_Protection_des_Données_(RGPD)[en]General_Data_Protection_Regulation_(GDPR)[nl]General_Data_Protection_Regulation_(GDPR) #BIPA (...)

    ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##scraping

  • Despite Scanning Millions With Facial Recognition, Feds Caught Zero Imposters at Airports Last Year
    https://onezero.medium.com/despite-scanning-millions-of-faces-feds-caught-zero-imposters-at-air

    U.S. Customs and Border Protection scanned more than 23 million people with facial recognition technology at airports, seaports, and pedestrian crossings in 2020, the agency recently revealed in its annual report on trade and travel. The agency scanned four million more people than in 2019. The report indicates that the system caught no imposters traveling (...)

    #CBP #algorithme #CCTV #biométrie #facial #fraude #reconnaissance #vidéo-surveillance #iris #empreintes #frontières #surveillance (...)


  • Chinese scanners used by Belgian customs under fire: “With these, they can infiltrate our airports”
    https://www.lalibre.be/belgique/societe/les-scanners-chinois-utilises-par-les-douanes-belges-pointes-du-doigt-avec-c

    Belgian customs use scanners made by a Chinese company that are nonetheless banned in other countries for security reasons. While customs itself sees no problem, criticism is mounting, according to Friday’s edition of the newspaper De Standaard. The fixed and mobile scanners in question, made by the Chinese company Nuctech, are used in Belgian ports and in certain train stations and airports. However, more and more questions are being raised about this (...)

    #scanner #biométrie #données #empreintes #surveillance


  • Inside China’s unexpected quest to protect data privacy
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr

    A new privacy law would look a lot like Europe’s GDPR—but will it restrict state surveillance?

    Late in the summer of 2016, Xu Yuyu received a call that promised to change her life. Her college entrance examination scores, she was told, had won her admission to the English department of the Nanjing University of Posts and Telecommunications. Xu lived in the city of Linyi in Shandong, a coastal province in China, southeast of Beijing. She came from a poor family, singularly reliant on her father’s meager income. But her parents had painstakingly saved for her tuition; very few of her relatives had ever been to college.

    A few days later, Xu received another call telling her she had also been awarded a scholarship. To collect the 2,600 yuan ($370), she needed to first deposit a 9,900 yuan “activation fee” into her university account. Having applied for financial aid only days before, she wired the money to the number the caller gave her. That night, the family rushed to the police to report that they had been defrauded. Xu’s father later said his greatest regret was asking the officer whether they might still get their money back. The answer—“Likely not”—only exacerbated Xu’s devastation. On the way home she suffered a heart attack. She died in a hospital two days later.

    An investigation determined that while the first call had been genuine, the second had come from scammers who’d paid a hacker for Xu’s number, admissions status, and request for financial aid.

    For Chinese consumers all too familiar with having their data stolen, Xu became an emblem. Her death sparked a national outcry for greater data privacy protections. Only months before, the European Union had adopted the General Data Protection Regulation (GDPR), an attempt to give European citizens control over how their personal data is used. Meanwhile, Donald Trump was about to win the American presidential election, fueled in part by a campaign that relied extensively on voter data. That data included details on 87 million Facebook accounts, illicitly obtained by the consulting firm Cambridge Analytica. Chinese regulators and legal scholars followed these events closely.

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms. In his 2018 Senate testimony after the Cambridge Analytica scandal, Facebook’s CEO, Mark Zuckerberg, urged regulators not to clamp down too hard on technologies like face recognition. “We still need to make it so that American companies can innovate in those areas,” he said, “or else we’re going to fall behind Chinese competitors and others around the world.”

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country. It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime, says Samm Sacks, a leading China scholar at Yale and New America, a think tank in Washington, DC. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    November 2000 arguably marks the birth of the modern Chinese surveillance state. That month, the Ministry of Public Security, the government agency that oversees daily law enforcement, announced a new project at a trade show in Beijing. The agency envisioned a centralized national system that would integrate both physical and digital surveillance using the latest technology. It was named Golden Shield.

    Eager to cash in, Western companies including American conglomerate Cisco, Finnish telecom giant Nokia, and Canada’s Nortel Networks worked with the agency on different parts of the project. They helped construct a nationwide database for storing information on all Chinese adults, and developed a sophisticated system for controlling information flow on the internet—what would eventually become the Great Firewall. Much of the equipment involved had in fact already been standardized to make surveillance easier in the US—a consequence of the Communications Assistance for Law Enforcement Act of 1994.

    Despite the standardized equipment, the Golden Shield project was hampered by data silos and turf wars within the Chinese government. Over time, the ministry’s pursuit of a singular, unified system devolved into two separate operations: a surveillance and database system, devoted to gathering and storing information, and the social-credit system, which some 40 government departments participate in. When people repeatedly do things that aren’t allowed—from jaywalking to engaging in business corruption—their social-credit score falls and they can be blocked from things like buying train and plane tickets or applying for a mortgage.

    In the same year the Ministry of Public Security announced Golden Shield, Hong Yanqing entered the ministry’s police university in Beijing. But after seven years of training, having received his bachelor’s and master’s degrees, Hong began to have second thoughts about becoming a policeman. He applied instead to study abroad. By the fall of 2007, he had moved to the Netherlands to begin a PhD in international human rights law, approved and subsidized by the Chinese government.

    Over the next four years, he familiarized himself with the Western practice of law through his PhD research and a series of internships at international organizations. He worked at the International Labor Organization on global workplace discrimination law and the World Health Organization on road safety in China. “It’s a very legalistic culture in the West—that really strikes me. People seem to go to court a lot,” he says. “For example, for human rights law, most of the textbooks are about the significant cases in court resolving human rights issues.”

    Hong found this to be strangely inefficient. He saw going to court as a final resort for patching up the law’s inadequacies, not a principal tool for establishing it in the first place. Legislation crafted more comprehensively and with greater forethought, he believed, would achieve better outcomes than a system patched together through a haphazard accumulation of case law, as in the US.

    After graduating, he carried these ideas back to Beijing in 2012, on the eve of Xi Jinping’s ascent to the presidency. Hong worked at the UN Development Program and then as a journalist for the People’s Daily, the largest newspaper in China, which is owned by the government.

    Xi began to rapidly expand the scope of government censorship. Influential commentators, or “Big Vs”—named for their verified accounts on social media—had grown comfortable criticizing and ridiculing the Chinese Communist Party. In the fall of 2013, the party arrested hundreds of microbloggers for what it described as “malicious rumor-mongering” and paraded a particularly influential one on national television to make an example of him.

    The moment marked the beginning of a new era of censorship. The following year, the Cyberspace Administration of China was founded. The new central agency was responsible for everything involved in internet regulation, including national security, media and speech censorship, and data protection. Hong left the People’s Daily and joined the agency’s department of international affairs. He represented it at the UN and other global bodies and worked on cybersecurity cooperation with other governments.

    By July 2015, the Cyberspace Administration had released a draft of its first law. The Cybersecurity Law, which entered into force in June of 2017, required that companies obtain consent from people to collect their personal information. At the same time, it tightened internet censorship by banning anonymous users—a provision enforced by regular government inspections of data from internet service providers.

    In the spring of 2016, Hong sought to return to academia, but the agency asked him to stay. The Cybersecurity Law had purposely left the regulation of personal data protection vague, but consumer data breaches and theft had reached unbearable levels. A 2016 study by the Internet Society of China found that 84% of those surveyed had suffered some leak of their data, including phone numbers, addresses, and bank account details. This was spurring a growing distrust of digital service providers that required access to personal information, such as ride-hailing, food-delivery, and financial apps. Xu Yuyu’s death poured oil on the flames.

    The government worried that such sentiments would weaken participation in the digital economy, which had become a central part of its strategy for shoring up the country’s slowing economic growth. The advent of GDPR also made the government realize that Chinese tech giants would need to meet global privacy norms in order to expand abroad.

    Hong was put in charge of a new task force that would write a Personal Information Protection Specification (PIPS) to help solve these challenges. The document, though nonbinding, would tell companies how regulators intended to implement the Cybersecurity Law. In the process, the government hoped, it would nudge them to adopt new norms for data protection by themselves.

    Hong’s task force set about translating every relevant document they could find into Chinese. They translated the privacy guidelines put out by the Organization for Economic Cooperation and Development and by its counterpart, the Asia-Pacific Economic Cooperation; they translated GDPR and the California Consumer Privacy Act. They even translated the 2012 White House Consumer Privacy Bill of Rights, introduced by the Obama administration but never made into law. All the while, Hong met regularly with European and American data protection regulators and scholars.

    Bit by bit, from the documents and consultations, a general choice emerged. “People were saying, in very simplistic terms, ‘We have a European model and the US model,’” Hong recalls. The two approaches diverged substantially in philosophy and implementation. Which one to follow became the task force’s first debate.

    At the core of the European model is the idea that people have a fundamental right to have their data protected. GDPR places the burden of proof on data collectors, such as companies, to demonstrate why they need the data. By contrast, the US model privileges industry over consumers. Businesses define for themselves what constitutes reasonable data collection; consumers only get to choose whether to use that business. The laws on data protection are also far more piecemeal than in Europe, divvied up among sectoral regulators and specific states.

    At the time, without a central law or single agency in charge of data protection, China’s model more closely resembled the American one. The task force, however, found the European approach compelling. “The European rule structure, the whole system, is more clear,” Hong says.

    But most of the task force members were representatives from Chinese tech giants, like Baidu, Alibaba, and Huawei, and they felt that GDPR was too restrictive. So they adopted its broad strokes—including its limits on data collection and its requirements on data storage and data deletion—and then loosened some of its language. GDPR’s principle of data minimization, for example, maintains that only necessary data should be collected in exchange for a service. PIPS allows room for other data collection relevant to the service provided.

    PIPS took effect in May 2018, the same month that GDPR finally took effect. But as Chinese officials watched the US upheaval over the Facebook and Cambridge Analytica scandal, they realized that a nonbinding agreement would not be enough. The Cybersecurity Law didn’t have a strong mechanism for enforcing data protection. Regulators could only fine violators up to 1,000,000 yuan ($140,000), an inconsequential amount for large companies. Soon after, the National People’s Congress, China’s top legislative body, voted to begin drafting a Personal Information Protection Law within its current five-year legislative period, which ends in 2023. It would strengthen data protection provisions, provide for tougher penalties, and potentially create a new enforcement agency.

    After Cambridge Analytica, says Hong, “the government agency understood, ‘Okay, if you don’t really implement or enforce those privacy rules, then you could have a major scandal, even affecting political things.’”

    The local police investigation of Xu Yuyu’s death eventually identified the scammers who had called her. They turned out to be a gang of seven who had cheated many other victims out of more than 560,000 yuan using illegally obtained personal information. The court ruled that Xu’s death had been a direct result of the stress of losing her family’s savings. Because of this, and his role in orchestrating tens of thousands of other calls, the ringleader, Chen Wenhui, 22, was sentenced to life in prison. The others received sentences of between three and 15 years.

    Emboldened, Chinese media and consumers began more openly criticizing privacy violations. In March 2018, internet search giant Baidu’s CEO, Robin Li, sparked social-media outrage after suggesting that Chinese consumers were willing to “exchange privacy for safety, convenience, or efficiency.” “Nonsense,” wrote a social-media user, later quoted by the People’s Daily. “It’s more accurate to say [it is] impossible to defend [our privacy] effectively.”

    In late October 2019, social-media users once again expressed anger after photos began circulating of a school’s students wearing brainwave-monitoring headbands, supposedly to improve their focus and learning. The local educational authority eventually stepped in and told the school to stop using the headbands because they violated students’ privacy. A week later, a Chinese law professor sued a Hangzhou wildlife zoo for replacing its fingerprint-based entry system with face recognition, saying the zoo had failed to obtain his consent for storing his image.

    But the public’s growing sensitivity to infringements of consumer privacy has not led to many limits on state surveillance, nor even much scrutiny of it. As Maya Wang, a researcher at Human Rights Watch, points out, this is in part because most Chinese citizens don’t know the scale or scope of the government’s operations. In China, as in the US and Europe, there are broad public and national security exemptions to data privacy laws. The Cybersecurity Law, for example, allows the government to demand data from private actors to assist in criminal legal investigations. The Ministry of Public Security also accumulates massive amounts of data on individuals directly. As a result, data privacy in industry can be strengthened without significantly limiting the state’s access to information.

    The onset of the pandemic, however, has disturbed this uneasy balance.

    On February 11, Ant Financial, a financial technology giant headquartered in Hangzhou, a city southwest of Shanghai, released an app-building platform called AliPay Health Code. The same day, the Hangzhou government released an app it had built using the platform. The Hangzhou app asked people to self-report their travel and health information, and then gave them a color code of red, yellow, or green. Suddenly Hangzhou’s 10 million residents were all required to show a green code to take the subway, shop for groceries, or enter a mall. Within a week, local governments in over 100 cities had used AliPay Health Code to develop their own apps. Rival tech giant Tencent quickly followed with its own platform for building them.

    The apps made visible a worrying level of state surveillance and sparked a new wave of public debate. In March, Hu Yong, a journalism professor at Beijing University and an influential blogger on Weibo, argued that the government’s pandemic data collection had crossed a line. Not only had it led to instances of information being stolen, he wrote, but it had also opened the door to such data being used beyond its original purpose. “Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” he asked.

    Indeed, in late May, leaked documents revealed plans from the Hangzhou government to make a more permanent health-code app that would score citizens on behaviors like exercising, smoking, and sleeping. After a public outcry, city officials canceled the project. That state-run media had also published stories criticizing the app likely helped.

    The debate quickly made its way to the central government. That month, the National People’s Congress announced it intended to fast-track the Personal Information Protection Law. The scale of the data collected during the pandemic had made strong enforcement more urgent, delegates said, and highlighted the need to clarify the scope of the government’s data collection and data deletion procedures during special emergencies. By July, the legislative body had proposed a new “strict approval” process for government authorities to undergo before collecting data from private-sector platforms. The language again remains vague, to be fleshed out later—perhaps through another nonbinding document—but this move “could mark a step toward limiting the broad scope” of existing government exemptions for national security, wrote Sacks and fellow China scholars at New America.

    Hong similarly believes the discrepancy between rules governing industry and government data collection won’t last, and the government will soon begin to limit its own scope. “We cannot simply address one actor while leaving the other out,” he says. “That wouldn’t be a very scientific approach.”

    Other observers disagree. The government could easily make superficial efforts to address public backlash against visible data collection without really touching the core of the Ministry of Public Security’s national operations, says Wang, of Human Rights Watch. She adds that any laws would likely be enforced unevenly: “In Xinjiang, Turkic Muslims have no say whatsoever in how they’re treated.”

    Still, Hong remains an optimist. In July, he started a job teaching law at Beijing University, and he now maintains a blog on cybersecurity and data issues. Monthly, he meets with a budding community of data protection officers in China, who carefully watch how data governance is evolving around the world.

    #criminalité #Nokia_Siemens #fraude #Huawei #payement #Cisco #CambridgeAnalytica/Emerdata #Baidu #Alibaba #domination #bénéfices #BHATX #BigData #lutte #publicité (...)

    ##criminalité ##CambridgeAnalytica/Emerdata ##publicité ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##Nortel_Networks ##Facebook ##biométrie ##consommation ##génétique ##consentement ##facial ##reconnaissance ##empreintes ##Islam ##SocialCreditSystem ##surveillance ##TheGreatFirewallofChina ##HumanRightsWatch

  • The Facial Recognition Backlash Is Here
    https://onezero.medium.com/the-facial-recognition-backlash-15b5707444f3

    But will the current bans last? The facial recognition industry has been quietly working alongside law enforcement, military organizations, and private companies for years, leveraging 40-year-old partnerships originally centered on fingerprint databases. But in 2020, the industry faced an unexpected reckoning. February brought an explosive New York Times report on Clearview AI, a facial recognition company that had scraped billions of images from social media to create an (...)

    #Clearview #Microsoft #Walmart #IBM #Amazon #biométrie #police #racisme #facial #reconnaissance #discrimination #empreintes #surveillance #algorithme #CCTV #vidéo-surveillance #ACLU (...)

    ##FightfortheFuture

  • Tech Companies Are Pushing Back Against Biometric Privacy Laws
    https://www.bloomberg.com/news/articles/2017-07-20/tech-companies-are-pushing-back-against-biometric-privacy-laws

    They want your body. Privacy advocates cheered when Illinois passed its Biometric Information Privacy Act (BIPA) in 2008 regulating commercial use of finger, iris, and facial scans. With companies such as Facebook Inc. and Google Inc. developing facial tagging technology, it was clear that laws would be needed to ensure companies didn’t collect and use biometric data in ways that compromised an individual’s right to privacy. If you lose your credit card, it’s easily replaced. But what happens (...)

    #Walmart #Google #Apple #Amazon #Facebook #biométrie #consentement #données #facial #législation #reconnaissance #iris #empreintes #lobbying #surveillance (...)

    ##EFF

  • #Calais: #maraudes and #ratonnades

    The recent images of #violences_policières recall those seen in Calais during the dismantling of the “jungle,” or, more recently still, those used against the refugees who remain there.

    On January 15, 2018, during Emmanuel Macron’s visit to Calais, two refugee-aid associations filed a complaint against persons unknown for “destruction and #dégradation_de_biens.” Forced to sleep outside, the refugees are victims of police violence that associations have been denouncing for months: a 16-year-old Eritrean lost an eye during a police operation...

    A repentant nationalist, volunteers at the end of their rope, exhausted refugees: they bear witness to a story that repeats itself desperately, each time harsher and more brutal.

    Patricia, a volunteer on Salam’s outreach patrols, speaks of her incomprehension at the #haine that some Calais residents express toward these refugees. Criticism coming from people who sometimes face great difficulties themselves and must contend with another form of #misère.

    Romuald had initially found his place in an “anti-migrant” association frequented by far-right circles.

    “Managing immigration, closing the #frontières, I’m for that; but from there to gassing a guy because his skin is a different color, there’s a world of difference.” Romuald, today a volunteer for the #association_Salam.

    He left the group, disagreeing with its radicalism. Some time later, Patricia encouraged him to come along on an outreach patrol, then to join the Salam association, of which he is now an active member.

    “For a Calais local to make a living, he’d better invest in barbed wire or fencing. Here there’s fencing on every corner.” Romuald

    Youssef is a member of #Utopia_56, an association helping the migrants of Calais. He recounts the #dispersions, the #gaz_lacrymogènes, and the police violence.

    “We’re not equipped to wage war. They’re the ones with the weapons.”

    https://www.franceculture.fr/emissions/les-pieds-sur-terre/calais-maraudes-et-ratonnades


    #asile #migrations #réfugiés #démantèlement #destruction #campement #audio #podcast #SDF #logement #hébergement #sans-abri #haine_des_réfugiés #extrême_droite #solidarité #violence #Salam #anti-migrants #islamophobie #fake_news #anti-musulmans #témoignage #distribution_de_repas

    ----

    At minute 25’10, the testimony of a migrant, Abeba from Ethiopia:
    “I am ‘Dublined’; I am the slave of my #empreintes”
    #empreintes_digitales

    ping @isskein @karine4

  • Here’s how a well-connected security company is quietly building mass biometric databases in West Africa with EU aid funds
    https://www.privacyinternational.org/news-analysis/4290/heres-how-well-connected-security-company-quietly-building-mas

    Documents disclosed to Privacy International reveal how the European Union has been using aid funds to finance the development of biometric identity systems in countries in Africa as part of its response to migration, and highlight urgent concerns. Key points: The EU is using aid funds to build mass-scale, high-risk biometric identification systems to manage migration flows to Europe and to facilitate deportations; Civipol, a well-connected French company owned by some of the (...)

    #Airbus #Safran #Thalès #biométrie #migration #militaire #données #facial #reconnaissance #iris #empreintes (...)

    ##PrivacyInternational

  • The #Technopolice, engine of “#sécurité_globale”

    Article 24 of the #loi_Sécurité_Globale must not become the tree that hides the forest: at the heart of this text lies a deeper policy aimed at moving police #surveillance and #contrôle_de_la_population into a new technological era.

    A few days before the vote on the Sécurité Globale bill in the Assemblée Nationale, the Ministry of the Interior presented its #Livre_blanc. This long forward-looking #rapport reveals the ministry’s #feuille_de_route for the years to come. As Gérald Darmanin explained to the deputies, the Sécurité Globale bill is only the beginning of the white paper’s transposition into legislation. Beyond banning the dissemination of #images of the police (#article_24), the law aims above all to considerably strengthen the surveillance powers of the #forces_de_l’ordre, notably through the legalization of #drones (article 22), the live transmission of #caméras_piétons footage to operations centers (article 21), new prerogatives for the #police_municipale (article 20), and #vidéosurveillance in building lobbies (article 20bis). This law will be the first stone of a vast project spanning several years.

    Ever more powers for the police

    The Ministry of the Interior’s white paper envisages increasing, at every level, the powers of the various #forces_de_sécurité (the #Police_nationale, the municipal police, the #gendarmerie, and #sécurité_privée agents): what official newspeak calls the “#continuum_de_la_sécurité_intérieure.” Seeking to “strengthen the police and make it more effective,” the white paper focuses on four main angles:

    - It aims to (re)build the population’s #confiance in its security forces, notably through #communication_renforcée, to “contribute to [their] legitimacy,” through the enlistment of youth (the #Service_National_Universel), and through the creation of “#journées_de_cohésion_nationale” (page 61). In the Sécurité Globale bill, this ambition is already illustrated by the possibility for police officers to take part in the “#guerre_de_l’image” by publishing videos taken with their #caméras_portatives (article 21).
    - It plans to expand the security powers of #maires, notably by broadening the remit of the municipal police: simplified access to #fichiers_de_police, new powers to combat #incivilités, and so on (page 135). Part of this is already present in the Sécurité Globale bill (article 20).
    - It pushes for the #professionnalisation of private security, which would thus become the police’s auxiliaries, notably in view of the Paris 2024 #Jeux_olympiques, where the need for private security promises to be colossal. This involves expanding its #compétences: extending its #armement, allowing intervention on the #voie_publique, the power to view camera feeds, and even a #uniforme_spécifique (page 145).
    - Finally, the last major axis of the white paper concerns integrating #nouvelles_technologies into the police arsenal. The title of this section is telling: bringing the Ministry of the Interior to the “#frontière_technologique” (the notion of the #frontière evokes the conquest of the American West, where land and first nations were colonized; the revival of this vocabulary reflects a colonial, virilist aesthetic).

    The white paper lays out a multitude of projects, each more outlandish and frightening than the last. It proposes #analyse_automatisée of #réseaux_sociaux (page 221), #gilets_connectés (connected vests) for the police (page 227), and augmented glasses or #casques_augmentés (page 227). Finally, it insists on the importance of #biométrie for policing: the #interconnexion of #fichiers_biométriques (#TAJ, #FNAEG, #FAED…) (page 256); the use of #empreintes_digitales as an #identification tool during #contrôles_d’identité, with the police and gendarmerie #tablettes (#NEO and #NEOGEND) fitted with contactless fingerprint readers (page 258); more research into #reconnaissance_vocale and even recognition by #odeur (!) (page 260); and, lastly, pressing the legislature to allow experiments with #reconnaissance_faciale in the #espace_public (page 263).

    The technological shift to #surveillance by drone

    Among the new devices promoted by the white paper are #drones_de_police, here called "#drones_de_sécurité_intérieure". If authorised by the Sécurité Globale law, they would radically alter the powers of the police by giving them a capacity for total surveillance.

    It is particularly striking to see that the bill's rapporteurs treat this legalisation as a mere step of no consequence, disposing of it in a single sentence: "to authorise the State services contributing to #sécurité_intérieure and #défense_nationale, and the civil security forces, to film from the air (…)". Meanwhile, for the police and for industry, drones represent a revolution in the security field: one leading private player speaks of their "almost inexhaustible potential", since they are "fast, easy to operate, discreet" and "quite simply perfect for surveillance missions".

    In the security-driven discourse promoting these devices, it is striking to see the frustration over the (supposedly) "limited" capabilities of fixed cameras, and how much their promoters fantasise about the "potential" of drones. The LR mayor of Asnières-sur-Seine, for instance, complained in 2016 that it is materially impossible to "equip every street corner with #vidéoprotection" and that drones "are the technical tools best suited" to compensating for the limits of human presence. The police themselves vaunt the omnipotence of the #robot: for #contrôles_routiers (road checks), for example, "the drone's camera detects every offence" and "officers are proving that nothing escapes them any more". The same goes for the #discrétion of these tools, which can, "at a much lower cost" than a helicopter, "carry out surveillance further towards the horizon without being positioned vertically above the suspects". Manufacturers, for their part, advertise "powerful #zooms", "#caméras_thermiques" (thermal cameras), a "#vision_d’aigle" (eagle's-eye view), and the possibility of "#décollage (take-off) from practically anywhere".

    None of this is mere fantasy. According to a report by the Assemblée Nationale, in 2019 the police already had, for example, 30 drones of the "#Phantom_4" and "#Mavic_Pro" type (or "#Mavic_2_Enterprise", as we learned during our litigation against the Paris police prefecture). One need only look at the manufacturer's spec sheets to be flooded with technical terms vaunting the product's omniscience: "3-axis gimbal camera", "4K video", "12-megapixel photos", "infrared thermal camera", "maximum flight speed of 72 km/h"… So many terms matching the descriptions given by their promoters: a flying machine, discreet, able to watch everything (public space or not), and from afar.

    This is therefore not an improvement of the existing video surveillance apparatus but a change of scale that transforms its nature, ushering in a massive and largely invisible surveillance of public space. And this far beyond the light framework painstakingly imposed on fixed cameras, which notably required every installed camera to demonstrate its usefulness and relevance, that is, the necessity and #proportionnalité of its installation. Even so, video surveillance remains a costly public policy that has never been evaluated. As a recent report by the Cour des comptes notes, "no overall correlation has been found between the existence of video protection systems and the level of crime committed on the public highway, or clearance rates". Another fundamental principle of the law currently governing video surveillance (itself already largely unapplied): every person filmed must be informed of that surveillance. Drones contradict both principles: their use is incompatible with any notion of informing the persons filmed, or of necessity and proportionality.

    Where will we be in four years?

    In practice, these technological and legislative developments are preparing a wholesale shift in #pratiques_policières (and thus in our daily lives). The white paper sets a key deadline in this respect: "the Paris 2024 Olympic and Paralympic Games will be an event of exceptional dimensions posing major security challenges" (p. 159). Moreover, "the Olympic Games will not be a testing ground: these technologies will need to be already proven, notably at the 2023 Rugby World Cup" (p. 159).

    In July 2019, the parliamentary report cited above counted 30 drones and 23 pilots in the Police nationale. In November 2020, the white paper (p. 231) counts 235 drones and 146 pilots. In 16 months, the number of drones and pilots has thus been multiplied by roughly seven. As early as April 2020, the Ministry of the Interior published a call for tenders to acquire 650 more drones. Remember: these acquisitions were made in violation of the law. What will happen once drones are authorised by the Sécurité Globale law? With how many thousands of flying devices will we soon have to share our streets? Should we fear that, during the 2024 #JO, dozens of drones will be assigned to the surveillance of each neighbourhood of the Paris region, flying more or less automatically over every street, without respite, all day long?
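    As a quick sanity check on those growth figures (a throwaway sketch; the counts are the ones quoted from the July 2019 parliamentary report, the November 2020 white paper and the April 2020 call for tenders):

    ```python
    # Fleet figures quoted above: July 2019 parliamentary report vs. November 2020 white paper.
    drones_2019, pilots_2019 = 30, 23
    drones_2020, pilots_2020 = 235, 146

    drone_growth = drones_2020 / drones_2019  # ~7.8x in 16 months
    pilot_growth = pilots_2020 / pilots_2019  # ~6.3x in 16 months
    print(f"Drones: x{drone_growth:.1f}, pilots: x{pilot_growth:.1f}")

    # Adding the 650 extra drones from the April 2020 call for tenders:
    print(f"Potential fleet: {drones_2020 + 650} drones")
    ```

    The "multiplied by seven" in the text is thus an average of the two ratios, not an exact figure for either.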

    Developments in facial recognition invite projections that are even more chilling and surreal. As early as 2016, we warned that the #TES mega-file, meant to hold the faces of the entire population, would ultimately serve above all to generalise facial recognition across all police activities: investigations, public-order policing, identity checks. With every police and gendarmerie patrol carrying a body camera, as promised by Macron for 2021, and the real-time retransmission permitted by the Sécurité Globale law, this police dream will be within reach: the government will only have to unilaterally amend its #décret_TES to bolt on a facial recognition system (exactly as it did in 2012 to allow facial recognition from the TAJ, which on its own already contains 8 million photos). Alongside the robots in the sky would come mute humans whose augmented-reality headsets, described in the white paper and coupled with automated image analysis and NEO tablets, would enable systematic and silent checks, broken only by the violence of interventions directed discreetly and at a distance through the myriad of drones and #cyborgs.

    In short, this white paper, much of which is already transposed into the Sécurité Globale bill, announces the crossing of a historic #cap_sécuritaire: ever more surveillance, ever more resources and powers for the police and their ilk, on a scale and at a pace never seen before. What is asserting and consolidating itself, with lavish public funding, is an #État_autoritaire. The white paper thus proposes tripling the #budget of the Ministry of the Interior, with an increase of €6.7 billion over 10 years and €3 billion between 2020 and 2025. An unbearable provocation, and one that invites serious reflection on defunding the police in favour of public services whose decay plunges the population into an #insécurité far deeper than the one the police supposedly manage.

    https://www.laquadrature.net/2020/11/19/la-technopolice-moteur-de-la-securite-globale
    #France #Etat_autoritaire

    ping @isskein @karine4 @simplicissimus @reka @etraces

  • DHS Plans to Start Collecting Eye Scans and DNA
    https://theintercept.com/2020/11/17/dhs-biometrics-dna

    As the agency plans to collect more biometrics, including from U.S. citizens, Northrop Grumman is helping build the infrastructure. Through a little-discussed potential bureaucratic rule change, the Department of Homeland Security is planning to collect unprecedented levels of biometric information from immigration applicants and their sponsors — including U.S. citizens. While some types of applicants have long been required to submit photographs and fingerprints, a rule currently under (...)

    #NorthropGrumman #Clearview #ICE #DHS #BAE_ #FBI #CBP #biométrie #migration #données #facial #reconnaissance #iris #empreintes #génétique #surveillance (...)

    ##EFF

  • Police are using fingerprint scanners to target Black Britons
    https://www.wired.co.uk/article/police-fingerprint-scan-uk

    Police use of mobile fingerprint scanners is soaring. But as with stop-and-search, stop-and-scan is disproportionately being used against ethnic minorities Three quarters of police forces in England and Wales now have access to mobile fingerprint scanners issued by the Home Office, new data reveals. In total, 28 of 43 police forces have started using the Strategic Mobile solution technology since it was first trialled, with four conducting their own pilot tests and seven other forces in (...)

    #scanner #activisme #biométrie #empreintes

  • The European Parliament unveils a new biometric attendance system
    https://www.euractiv.fr/section/economie/news/exclusive-parliament-documents-reveal-new-biometric-attendance-system

    The European Parliament wants to set up a biometric clocking-in system for MEPs attending meetings at its Brussels premises, according to internal documents seen by EURACTIV. A new "central biometric attendance register" recording MEPs' participation in parliamentary meetings has been approved by the European Parliament's Bureau. Thanks to this technology, MEPs will "automatically" receive their daily allowance for their (...)

    #biométrie #empreintes #surveillance #travail

  • Amazon introduces contactless payment with the palm of the hand
    https://geeko.lesoir.be/2020/09/29/amazon-introduit-le-paiement-sans-contact-avec-la-paume-de-la-main

    A palm instead of a bank card: on Tuesday, Amazon unveiled a contactless biometric technology that lets customers pay in store with a simple movement of the hand. The service, called Amazon One, will initially be deployed in two of the Amazon Go convenience stores in Seattle, the company's home town. Jeff Bezos' group plans to add the technology to its other grocery stores in the United States (in Chicago, San Francisco and New York, in addition to the other Seattle locations) and (...)

    #Amazon #algorithme #payement #reconnaissance #empreintes

  • Big Data has allowed ICE to dramatically expand its deportation efforts.
    https://slate.com/technology/2020/09/palantir-ice-deportation-immigrant-surveillance-big-data.html

    A New Mexico man gets a call from federal child welfare officials. His teenage brother has arrived alone at the border after traveling 2,000 miles to escape a violent uncle in Guatemala. The officials ask him to take custody of the boy. He hesitates ; he is himself undocumented. The officials say not to worry. He agrees and gives the officials his information. Seven months later, ICE agents arrest him at his house and start deportation proceedings. A family in suburban Maryland gets a (...)

    #Palantir #CBP #ICE #algorithme #biométrie #migration #facial #reconnaissance #BigData #conducteur·trice·s #empreintes (...)

    ##surveillance

  • Machine-Readable Refugees

    Hassan (not his real name; other details have also been changed) paused mid-story to take out his wallet and show me his ID card. Its edges were frayed. The grainy, black-and-white photo was of a gawky teenager. He ran his thumb over the words at the top: ‘Jamhuri ya Kenya/Republic of Kenya’. ‘Somehow,’ he said, ‘no one has found out that I am registered as a Kenyan.’

    He was born in the Kenyan town of Mandera, on the country’s borders with Somalia and Ethiopia, and grew up with relatives who had escaped the Somali civil war in the early 1990s. When his aunt, who fled Mogadishu, applied for refugee resettlement through the United Nations High Commissioner for Refugees, she listed Hassan as one of her sons – a description which, if understood outside the confines of biological kinship, accurately reflected their relationship.

    They were among the lucky few to pass through the competitive and labyrinthine resettlement process for Somalis and, in 2005, Hassan – by then a young adult – was relocated to Minnesota. It would be several years before US Citizenship and Immigration Services introduced DNA tests to assess the veracity of East African refugee petitions. The adoption of genetic testing by Denmark, France and the US, among others, has narrowed the ways in which family relationships can be defined, while giving the resettlement process the air of an impartial audit culture.

    In recent years, biometrics (the application of statistical methods to biological data, such as fingerprints or DNA) have been hailed as a solution to the elusive problem of identity fraud. Many governments and international agencies, including the UNHCR, see biometric identifiers and centralised databases as ways to determine the authenticity of people’s claims to refugee and citizenship status, to ensure that no one is passing as someone or something they’re not. But biometrics can be a blunt instrument, while the term ‘fraud’ is too absolute to describe a situation like Hassan’s.

    Biometrics infiltrated the humanitarian sector after 9/11. The US and EU were already building centralised fingerprint registries for the purposes of border control. But with the start of the War on Terror, biometric fever peaked, most evidently at the borders between nations, where the images of the terrorist and the migrant were blurred. A few weeks after the attacks, the UNHCR was advocating the collection and sharing of biometric data from refugees and asylum seekers. A year later, it was experimenting with iris scans along the Afghanistan/Pakistan frontier. On the insistence of the US, its top donor, the agency developed a standardised biometric enrolment system, now in use in more than fifty countries worldwide. By 2006, UNHCR agents were taking fingerprints in Kenya’s refugee camps, beginning with both index fingers and later expanding to all ten digits and both eyes.

    Reeling from 9/11, the US and its allies saw biometrics as a way to root out the new faceless enemy. At the same time, for humanitarian workers on the ground, it was an apparently simple answer to an intractable problem: how to identify a ‘genuine’ refugee. Those claiming refugee status could be cross-checked against a host country’s citizenship records. Officials could detect refugees who tried to register under more than one name in order to get additional aid. Biometric technologies were laden with promises: improved accountability, increased efficiency, greater objectivity, an end to the heavy-handed tactics of herding people around and keeping them under surveillance.

    When refugees relinquish their fingerprints in return for aid, they don’t know how traces of themselves can travel through an invisible digital architecture. A centralised biometric infrastructure enables opaque, automated data-sharing with third parties. Human rights advocates worry about sensitive identifying information falling into the hands of governments or security agencies. According to a recent privacy-impact report, the UNHCR shares biometric data with the Department of Homeland Security when referring refugees for resettlement in the US. ‘The very nature of digitalised refugee data,’ as the political scientist Katja Jacobsen says, ‘means that it might also become accessible to other actors beyond the UNHCR’s own biometric identity management system.’

    Navigating a complex landscape of interstate sovereignty, caught between host and donor countries, refugee aid organisations often hold contradictory, inconsistent views on data protection. UNHCR officials have long been hesitant about sharing information with the Kenyan state, for instance. Their reservations are grounded in concerns that ‘confidential asylum-seeker data could be used for non-protection-related purposes’. Kenya has a poor record of refugee protection. Its security forces have a history of harassing Somalis, whether refugees or Kenyan citizens, who are widely mistrusted as ‘foreigners’.

    Such well-founded concerns did not deter the UNHCR from sharing data with, funding and training Kenya’s Department of Refugee Affairs (now the Refugee Affairs Secretariat), which since 2011 has slowly and unevenly taken over refugee registration in the country. The UNHCR has conducted joint verification exercises with the Kenyan government to weed out cases of double registration. According to the anthropologist Claire Walkey, these efforts were ‘part of the externalisation of European asylum policy ... and general burden shifting to the Global South’, where more than 80 per cent of the world’s refugees live. Biometrics collected for protection purposes have been used by the Kenyan government to keep people out. Tens of thousands of ethnic Somali Kenyan citizens who have tried to get a Kenyan national ID have been turned away in recent years because their fingerprints are in the state’s refugee database.

    Over the last decade, biometrics have become part of the global development agenda, allegedly a panacea for a range of problems. One of the UN’s Sustainable Development Goals is to provide everyone with a legal identity by 2030. Governments, multinational tech companies and international bodies from the World Bank to the World Food Programme have been promoting the use of digital identity systems. Across the Global South, biometric identifiers are increasingly linked to voting, aid distribution, refugee management and financial services. Countries with some of the least robust privacy laws and most vulnerable populations are now laboratories for experimental tech.

    Biometric identifiers promise to tie legal status directly to the body. They offer seductively easy solutions to the problems of administering large populations. But it is worth asking what (and who) gets lost when countries and international bodies turn to data-driven, automated solutions. Administrative failures, data gaps and clunky analogue systems had posed huge challenges for people at the mercy of dispassionate bureaucracies, but also provided others with room for manoeuvre.

    Biometrics may close the gap between an ID and its holder, but it opens a gulf between streamlined bureaucracies and people’s messy lives, their constrained choices, their survival strategies, their hopes for a better future, none of which can be captured on a digital scanner or encoded into a database.

    https://www.lrb.co.uk/blog/2020/september/machine-readable-refugees
    #biométrie #identité #réfugiés #citoyenneté #asile #migrations #ADN #tests_ADN #tests_génétiques #génétique #nationalité #famille #base_de_donnée #database #HCR #UNHCR #fraude #frontières #contrôles_frontaliers #iris #technologie #contrôle #réinstallation #protection_des_données #empreintes_digitales #identité_digitale

    ping @etraces @karine4
    via @isskein

  • Eight case studies on regulating biometric technology show us a path forward
    https://www.technologyreview.com/2020/09/04/1008164/ai-biometric-face-recognition-regulation-amba-kak

    A new report from the AI Now Institute reveals how different regulatory approaches work or fall short in protecting communities from surveillance. Amba Kak was in law school in India when the country rolled out the Aadhaar project in 2009. The national biometric ID system, conceived as a comprehensive identity program, sought to collect the fingerprints, iris scans, and photographs of all residents. It wasn’t long, Kak remembers, before stories about its devastating consequences began to (...)

    #Clearview #Facebook #biométrie #migration #[fr]Règlement_Général_sur_la_Protection_des_Données_(RGPD)[en]General_Data_Protection_Regulation_(GDPR)[nl]General_Data_Protection_Regulation_(GDPR) #consentement #données #facial #reconnaissance #iris #Aadhaar #discrimination (...)

    ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##empreintes ##pauvreté

  • Automated suspicion: The EU’s new travel surveillance initiatives

    This report examines how the EU is using new technologies to screen, profile and risk-assess travellers to the Schengen area, and the risks this poses to civil liberties and fundamental rights.

    By developing ‘interoperable’ biometric databases, introducing untested profiling tools, and using new ‘pre-crime’ watchlists, people visiting the EU from all over the world are being placed under a veil of suspicion in the name of enhancing security.

    Watch the animation below for an overview of the report. A laid-out version will be available shortly. You can read the press release here: https://www.statewatch.org/news/2020/july/eu-to-deploy-controversial-technologies-on-holidaymakers-and-business-tr

    -----

    Executive summary

    The ongoing coronavirus pandemic has raised the possibility of widespread surveillance and location tracking for the purpose of disease control, setting alarm bells ringing amongst privacy advocates and civil rights campaigners. However, EU institutions and governments have long been set on the path of more intensive personal data processing for the purpose of migration control, and these developments have in some cases passed almost entirely under the radar of the press and civil society organisations.

    This report examines, explains and critiques a number of large-scale EU information systems currently being planned or built that will significantly extend the collection and use of biometric and biographic data taken from visitors to the Schengen area, made up of 26 EU member states as well as Iceland, Liechtenstein, Norway and Switzerland. In particular, it examines new systems being introduced to track, analyse and assess the potential security, immigration or public health risks posed by non-EU citizens who have to apply for either a short-stay visa or a travel authorisation – primarily the #Visa_Information_System (#VIS), which is being upgraded, and the #European_Travel_Information_and_Authorisation_System (#ETIAS), which is currently under construction.

    The visa obligation has existed for years. The forthcoming travel authorisation obligation, which will cover citizens of non-EU states who do not require a visa, is new and will massively expand the amount of data the EU holds on non-citizens. It is the EU’s equivalent of the USA’s ESTA, Canada’s eTA and Australia’s ETA.[1] These schemes represent a form of “government permission to travel,” to borrow the words of Edward Hasbrouck,[2] and they rely on the extensive processing of personal data.

    Data will be gathered on travellers themselves as well as their families, education, occupation and criminal convictions. Fingerprints and photographs will be taken from all travellers, including from millions of children from the age of six onwards. This data will not just be used to assess an individual’s application, but to feed data mining and profiling algorithms. It will be stored in large-scale databases accessible to hundreds of thousands of individuals working for hundreds of different public authorities.

    Much of this data will also be used to feed an enormous new database holding the ‘identity data’ – fingerprints, photographs, names, nationalities and travel document data – of non-EU citizens. This system, the #Common_Identity_Repository (#CIR), is being introduced as part of the EU’s complex ‘interoperability’ initiative and aims to facilitate an increase in police identity checks within the EU. It will only hold the data of non-EU citizens and, with only weak anti-discrimination safeguards in the legislation, raises the risk of further entrenching racial profiling in police work.

    The remote monitoring and control of travellers is also being extended through the VIS upgrade and the introduction of ETIAS. Travel companies are already obliged to check, prior to an individual boarding a plane, coach or train, whether they have the visa required to enter the Schengen area. This obligation will be extended to include travel authorisations, with travel companies able to use the central databases of the VIS and ETIAS to verify whether a person’s paperwork is in order or not. When people arrive at the Schengen border, when they are within the Schengen area and long after they leave, their personal data will remain stored in these systems and be available for a multitude of further uses.

    These new systems and tools have been presented by EU institutions as necessary to keep EU citizens safe. However, the idea that more personal data gathering will automatically lead to greater security is a highly questionable claim, given that the authorities already have problems dealing with the data they hold now.

    Furthermore, a key part of the ‘interoperability’ agenda is the cross-matching and combination of data on tens of millions of people from a host of different databases. Given that the EU’s databases are already known to be strewn with errors, this massively increases the risk of mistakes in decision-making in a policy field – immigration – that already involves a high degree of discretion and which has profound implications for people’s lives.

    These new systems have been presented by their proponents as almost-inevitable technological developments. This is a misleading idea which masks the political and ethical judgments that lie behind the introduction of any new technology. It would be fairer to say that EU lawmakers have chosen to introduce unproven, experimental technologies – in particular, automated profiling – for use on non-EU citizens, who have no choice in the matter and are likely to face difficulties in exercising their rights.

    Finally, the introduction of new databases designed to hold data on tens of millions of non-citizens rests on the idea that our public authorities can be trusted to comply with the rules and will not abuse the new troves of data to which they are being given access. Granting access to more data to more people inevitably increases the risk of individual abuses. Furthermore, the last decade has seen numerous states across the EU turn their back on fundamental rights and democratic standards, with migrants frequently used as scapegoats for society’s ills. In a climate of increased xenophobia and social hostility to foreigners, it is extremely dangerous to assert that intrusive data-gathering will counterbalance a supposed threat posed by non-citizens.

    Almost all the legislation governing these systems has now been put in place. What remains is for them to be upgraded or constructed and put into use. Close attention should be paid by lawmakers, journalists, civil society organisations and others to see exactly how this is done. If all non-citizens are to be treated as potential risks and assessed, analysed, monitored and tracked accordingly, it may not be long before citizens come under the same veil of suspicion.

    https://www.statewatch.org/automated-suspicion-the-eu-s-new-travel-surveillance-initiatives

    #vidéo:
    https://vimeo.com/437830786

    #suspects #suspicion #frontières #rapport #StateWatch #migrations #asile #réfugiés #EU #UE #Union_européenne
    #surveillance #profiling #database #base_de_données #données_personnelles #empreintes_digitales #enfants #agences_de_voyage #privatisation #interopérabilité

    ping @mobileborders @isskein @etraces @reka

  • EU data watchdog to ‘convince’ Commission to ban automated recognition tech
    https://www.euractiv.com/section/digital/news/eu-data-watchdog-argues-for-moratorium-on-recognition-technology

    Automated recognition technologies in public spaces should be temporarily banned, the EU’s institutional data protection watchdog has said, arguing in favour of a moratorium. Applications that should be outlawed for a limited period of time not only include facial recognition technologies but also software that captures “gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals,” the European Data Protection Supervisor said on Tuesday (30 June). EDPS head Wojciech (...)

    #algorithme #CCTV #[fr]Règlement_Général_sur_la_Protection_des_Données_(RGPD)[en]General_Data_Protection_Regulation_(GDPR)[nl]General_Data_Protection_Regulation_(GDPR) #biométrie #génétique #données #facial #reconnaissance #vidéo-surveillance #clavier #comportement #empreintes (...)

    ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##marche ##surveillance ##voix

  • Surveillance of minority Muslims in southern Thailand is powered by Chinese-style tech
    https://www.codastory.com/authoritarian-tech/surveillance-muslims-thailand

    Mandatory biometric registration has left many Malay Muslims distrustful of the state and concerned about how new technologies will impact their lives When Arief’s cell phone service was cut off, it came as no surprise. He had refused to visit the local branch of his mobile provider and give his fingerprints and a facial scan, in order to register his SIM card. He did so as a matter of principle, to show his opposition to what many believe to be increasing intrusions into the lives of Malay (...)

    #algorithme #CCTV #SIM #biométrie #génétique #racisme #données #facial #FAI #reconnaissance #vidéo-surveillance #BigData #empreintes #Islam #profiling #surveillance #discrimination #HumanRightsWatch #Mengvii (...)

    ##Face++

  • IDemia
    https://technopolice.fr/idemia

    History: Idemia is a French company that presents itself as the “leader in augmented identity.” It was created in 2017 from the merger of Morpho (Safran), regarded as the world leader in biometric identification, and Oberthur Technologies, a specialist in the manufacture of smart cards and identity documents. Idemia aims to lead both biometric identification and secure payments. Today, the company holds references in criminal identification (with the FBI, (...)

    #Idemia #Interpol #Morpho #NYPD #Safran #carte #PARAFE #biométrie #facial #reconnaissance #vidéo-surveillance #Aadhaar #empreintes #frontières #sport #surveillance (...)

    ##CNIL

  • Scaled-up surveillance: the EU builds a massive biometric database
    https://www.codastory.com/authoritarian-tech/eu-border-patrol-technology

    An upcoming biometrics repository will hold the records of 300 million people and could hand a potentially powerful surveillance tool to its member states. At its headquarters in the Estonian capital of Tallinn, on the Baltic coast, a little-known European Union body is building one of the world’s largest biometric identity databases. The Central Identity Repository (CIR) is designed to hold the records of 300 million people and will be the centerpiece of a new, integrated system that (...)

    #CentralIdentityRepository-CIR #empreintes #BigData #facial #migration

  • A Single Company Will Now Operate Facial Recognition for Nearly 800 Million People
    https://onezero.medium.com/idemia-will-operate-facial-recognition-for-nearly-800-million-people

    Idemia just scored a major new contract with the EU. Idemia, a French company specializing in facial, fingerprint, and iris recognition, just scored a new contract with the European Union that will include processing images attached to more than 400 million people’s identities. The company’s algorithms will verify the identity of EU residents who were born elsewhere and work for non-EU companies as they enter from external borders. Idemia doesn’t have direct access to this data as an (...)

    #Idemia #NYPD #Thalès #algorithme #passeport #CCTV #payement #biométrie #migration #facial #reconnaissance #iris #empreintes #frontières (...)

    ##BigData

  • Amid coronavirus, USC is requiring facial recognition scans of students living on campus, but the technology sparks controversy
    http://www.uscannenbergmedia.com/2020/05/15/amid-coronavirus-usc-is-requiring-facial-recognition-scans-of-stu

    The facial recognition device at USC also links to a questionable South Korean company that boasts contracts with government agencies. In the wake of the COVID-19 pandemic, USC has deactivated the fingerprint scanners in its residential halls, requiring students remaining in some dorms to use facial recognition technology to gain access to their rooms. While the (...)

    #algorithme #CCTV #biométrie #consentement #facial #reconnaissance #vidéo-surveillance #BigData #COVID-19 #empreintes #enseignement #santé #surveillance #ACLU (...)

    ##santé ##Fight_for_the_Future

  • Under Surveillance
    https://edri.org/comic-book/en/ldh-english/ldh-english/ldh-english.pdf

    In 2010, European Digital Rights, a Brussels-based non-profit that campaigns on issues relating to privacy and freedom of expression, published “Under Surveillance,” a comic book dealing with data protection, counter-terror measures, and privacy.

    #NSA #algorithme #passeport #CCTV #RFID #activisme #anti-terrorisme #vidéo-surveillance #biométrie #BigData #Echelon #empreintes #frontières #surveillance #LDH-France