• Open letter to the European Commission calling for clear regulatory red lines to prevent uses of artificial intelligence which violate core fundamental rights

    In 2020, EDRi, its members, and many other civil society organisations investigated several harmful uses of artificial intelligence which, unless restricted, will have severe implications for individual and collective rights and for democracy.

    We believe that, in addition to safeguards which aim to improve the process of AI design, development and deployment, there is a need for clear regulatory red lines for uses of AI which are incompatible with our fundamental rights. Uses which enable mass surveillance, over-police racialised and migrant communities, or exacerbate existing power imbalances are impermissible and must be curtailed in order to prevent abuse. We draw particular attention to the harmful impact of uses of AI at the border and in migration management.

    The European Commission has indicated that it is still considering regulatory red lines in some form as part of its AI regulatory proposal (expected Q1 2021). As such, we have prepared the attached open letter for publication on 11th January.

    If you would like the name of your organisation to be attached to the letter, please let us know by 7th January 2021 (17.00 CET).

    The list of signatory organisations will be updated via this Etherpad: https://pad.riseup.net/p/r.dd8356ea3e6e1b74b6dde570440d359b

    https://edri.org/wp-content/uploads/2020/09/Case-studies-Impermissable-AI-biometrics-September-2020.pdf

    Content of the letter:
    Open letter: Civil society call for AI red lines in the European Union’s Artificial Intelligence proposal

    We, the undersigned, write to restate the vital importance of clear regulatory red lines to prevent uses of artificial intelligence which violate core fundamental rights. As we await the regulatory proposal on artificial intelligence this quarter, we emphasise that such measures form a necessary part of a fundamental rights-based artificial intelligence regulation.

    Europe has an obligation under the Charter of Fundamental Rights of the European Union to ensure that each person’s rights to privacy, data protection, free expression and assembly, non-discrimination, dignity and other fundamental rights are not unduly restricted by the use of new and emerging technologies. Without appropriate limitations on the use of AI-based technologies, we face the risk of violations of our rights and freedoms by governments and companies alike.

    Europe has the opportunity to demonstrate to the world that true innovation can arise only when we can be confident that everyone will be protected from the most harmful and egregious violations of fundamental rights. Europe’s industry - from AI developers to car manufacturers - will benefit greatly from the regulatory certainty that comes with clear legal limits and a level playing field for fair competition.

    Civil society across Europe - and the world - has called attention to the need for regulatory limits on deployments of artificial intelligence that can unduly restrict human rights. It is vital that the upcoming regulatory proposal unequivocally addresses uses of AI which enable mass surveillance and the monitoring of public spaces; exacerbate structural discrimination, exclusion and collective harms; impede access to vital services such as healthcare and social security; impede fair access to justice and procedural rights; make inferences and predictions about our most sensitive characteristics, behaviours and thoughts; and, crucially, manipulate or control human behaviour, threatening human dignity, agency and collective democracy.

    In particular, we call attention to specific (but not exhaustive) examples of uses that, as our research has demonstrated, are incompatible with a democratic society, and must thus be prohibited or legally restricted in the AI legislation:

    Biometric mass surveillance:

    Uses of biometric surveillance technologies to process the indiscriminately or arbitrarily collected data of people in public or publicly accessible spaces (for example, remote facial recognition) create a strong perception of mass surveillance and a ‘chilling effect’ on people’s fundamental rights and freedoms. In this respect it is important to note that the deployment of biometric mass surveillance in public or publicly accessible spaces by definition entails the indiscriminate processing of biometric data. Moreover, because of this psychological ‘chilling’ effect, people may feel inclined to adapt their behaviour to a certain norm. Such use of biometric mass surveillance thus intrudes upon the psychological integrity and well-being of individuals, in addition to violating a vast range of fundamental rights. As emphasised in EU data protection legislation and case law, such uses are neither necessary nor proportionate to the aim sought, and should therefore be clearly prohibited in the AI legislation. This will ensure that law enforcement, national authorities and private entities cannot abuse the current wide margin of exception and discretion afforded to national governments.

    Predictive policing:

    Uses of predictive modelling to forecast where, and by whom, certain types of crime are likely to be committed repeatedly assign poor, working-class, racialised and migrant communities a higher likelihood of presumed future criminality. As highlighted by the European Parliament, the deployment of such predictive policing can result in “grave misuse”. The use of apparently “neutral” factors such as postal codes in practice serves as a proxy for race and other protected characteristics, reflecting histories of over-policing of certain communities, exacerbating racial biases and lending false objectivity to patterns of racial profiling. A number of predictive policing systems have been shown to disproportionately include racialised people, completely at odds with actual crime rates. Predictive policing systems undermine the presumption of innocence and other due process rights by treating people as individually suspicious based on inferences about a wider group.

    Uses of AI at the border and in migration control:

    The increasing deployment of AI in the field of migration control poses a growing threat to the fundamental rights of migrants, to EU law, and to human dignity. Among other worrying use cases, AI is being tested at European borders to detect lies in immigration applications and to detect deception in English language tests through voice analysis, uses which lack any credible scientific basis. In addition, EU migration policies are increasingly underpinned by the proposed or actual use of AI, such as facial recognition, algorithmic profiling and prediction tools within migration management processes, including for forced deportation. All such uses infringe on data protection rights, the right to privacy, the right to non-discrimination, and several principles of international migration law, including the right to seek asylum. Furthermore, the significant power imbalance that such deployments exacerbate and exploit should trigger the strongest possible conditions on such systems in border and migration control.

    Social scoring and AI systems determining access to social rights and benefits:

    AI systems have been deployed in various contexts which threaten the allocation of social and economic rights and benefits. For example, in the areas of welfare resource allocation, eligibility assessment and fraud detection, the deployment of AI to predict risk greatly impacts people’s access to vital public services and has a grave potential impact on the fundamental right to social security and social assistance. This is due in particular to the likelihood of discriminatory profiling, mistaken results and the inherent fundamental rights risks associated with the processing of sensitive biometric data. A number of examples demonstrate how automated decision-making systems negatively impact and target poor, migrant and working-class people. In a well-known case, the Dutch government deployed SyRI, a system to detect fraudulent behaviour by creating risk profiles of individual benefits claimants. The Polish government has likewise used data-driven systems to profile unemployed people, with severe implications for data protection and non-discrimination rights. Finally, uses in the context of employment and education have highlighted severe instances of worker and student surveillance, as well as harmful social scoring systems with severe implications for fundamental rights.

    Use of risk assessment tools for offenders’ classification in the criminal justice system:

    The use of algorithms in criminal justice matters to profile individuals within legal decision-making processes presents severe threats to fundamental rights. Such tools base their assessments on a vast collection of personal data unrelated to the defendants’ alleged misconduct. Collecting personal data for the purpose of predicting the risk of recidivism is neither necessary nor proportionate to the stated purpose, and the resulting interference with the right to respect for private life and the presumption of innocence cannot be justified. In addition, substantial evidence has shown that the introduction of such systems in criminal justice systems in Europe and elsewhere has resulted in unjust and discriminatory outcomes. Beyond biased outcomes, it may be impossible for legal professionals to understand the reasoning behind the system’s outputs. For these reasons, we argue that legal limits must be imposed on AI risk assessment systems in the criminal justice context.

    These examples illustrate the need for an ambitious artificial intelligence proposal in 2021 which foregrounds people’s rights and freedoms. We look forward to legislation which puts people first, and await your response on how the AI proposal will address the concerns outlined in this letter. We thank you for your consideration, and are available at your convenience to discuss these issues should it be helpful.

    #AI #intelligence_artificielle #lettre_ouverte #droits_fondamentaux #droits_humains

    ping @etraces

  • How the popular #Marche pour le #Climat (the People’s Climate March) became a corporate PR campaign
    (and how it is curiously reminiscent of what is being denounced around #Alternatiba: if supermarkets have long been denounced, it is perhaps not in order to see the same thing in our causes and campaigns!)

    #best_quote: "Nevertheless, to quote Han Solo, 'I have a bad feeling about this.'"

    "The image (and not the ideology) comes first and shapes reality. PR and marketing determine the tactics, the messaging, the organising, and the strategy. Whether this can have a positive effect is a different question, and that is why I encourage you all to participate. The future is unknowable."

    http://www.counterpunch.org/2014/09/19/how-the-peoples-climate-march-became-a-corporate-pr-campaign

  • Links for free media, episode 5
    http://atelier.mediaslibres.org/Des-liens-pour-les-medias-libres-5.html

    A few links about alternative media, hacks, news, new developments, etc., to link us together, meet and exchange. This time: a new news site, surveillance, the history of radio, legal aspects, Islamophobia, and many other things.

    Thanks to the people who passed information on to us!
    To help us:
    – a pad is available for noting links that seem interesting for this selection of news: https://pad.riseup.net/p/Lienspourmediaslibres ;
    – or, on Seenthis, you can use the tag #lml ;
    – if you found this page, or some of the articles cited, interesting, please pass it on!

    #lml

  • Links for free media 4
    http://atelier.mediaslibres.org/Des-liens-pour-les-medias-libres-4.html
    A few links about alternative media, hacks, news, new developments, etc., to link us together, meet and exchange. This time: an Italian manifesto, radical cartography, free radio stations federating, and meetings on archiving and the relationship to images…

    A pad is available for quickly noting links that seem interesting for this selection of news: https://pad.riseup.net/p/Lienspourmediaslibres.

    Or, on Seenthis, use the tag #lml. Thanks to the people who passed information on!