EU lawmakers bag late night deal on ‘global first’ AI rules

/eu-ai-act-political-deal

  • EU lawmakers bag late night deal on ‘global first’ AI rules | TechCrunch
    https://techcrunch.com/2023/12/08/eu-ai-act-political-deal

    The whole article is very interesting.

    Full details of what’s been agreed won’t be entirely confirmed until a final text is compiled and made public, which may take some weeks. But a press release put out by the European Parliament confirms the deal reached with the Council includes a total prohibition on the use of AI for:

      • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
      • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
      • emotion recognition in the workplace and educational institutions;
      • social scoring based on social behaviour or personal characteristics;
      • AI systems that manipulate human behaviour to circumvent their free will;
      • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

    The use of remote biometric identification technology in public places by law enforcement has not been completely banned — but the parliament said negotiators had agreed on a series of safeguards and narrow exceptions to limit use of technologies such as facial recognition. This includes a requirement for prior judicial authorisation — and with uses limited to a “strictly defined” list of crimes.

    Civil society groups have reacted sceptically — raising concerns the agreed limitations on state agencies’ use of biometric identification technologies will not go far enough to safeguard human rights. Digital rights group EDRi, which was among those pushing for a full ban on remote biometrics, said that whilst the deal contains “some limited gains for human rights”, it looks like “a shell of the AI law Europe really needs”.

    There was also agreement on a “two-tier” system of guardrails to be applied to “general” AI systems, such as the so-called foundational models underpinning the viral boom in generative AI applications like ChatGPT.

    As we reported earlier, the deal reached on foundational models/general purpose AIs (GPAIs) includes some transparency requirements for what co-legislators referred to as “low tier” AIs — meaning model makers must draw up technical documentation and produce (and publish) detailed summaries about the content used for training in order to support compliance with EU copyright law. For “high-impact” GPAIs (defined as those whose cumulative training compute, measured in floating point operations, exceeds 10^25) with so-called “systemic risk”, there are more stringent obligations.
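    The 10^25 FLOP threshold can be made concrete with a small sketch. The Act's text only states the threshold; the 6·N·D rule of thumb (compute ≈ 6 × parameters × training tokens) used below is a common community approximation, not part of the regulation, and the model sizes are purely illustrative:

    ```python
    # Sketch: would a model's training compute cross the AI Act's
    # "high-impact" GPAI threshold of 1e25 floating point operations?
    # The 6*N*D estimate is an assumed heuristic, not from the Act itself.

    AI_ACT_FLOP_THRESHOLD = 1e25  # threshold quoted in the political deal


    def training_flops(n_params: float, n_tokens: float) -> float:
        """Rough training-compute estimate via the 6*N*D rule of thumb."""
        return 6.0 * n_params * n_tokens


    def is_high_impact(n_params: float, n_tokens: float) -> bool:
        """True if the estimated compute exceeds the 1e25 FLOP threshold."""
        return training_flops(n_params, n_tokens) > AI_ACT_FLOP_THRESHOLD


    # Hypothetical model sizes for illustration only:
    print(is_high_impact(7e9, 2e12))      # ~8.4e22 FLOPs -> False
    print(is_high_impact(1.8e12, 13e12))  # ~1.4e26 FLOPs -> True
    ```

    Under this heuristic, only very large frontier-scale training runs clear the bar, which matches the article's framing of the tier as covering a small set of “systemic risk” models.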

    “If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency,” the parliament wrote. “MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.”

    The Commission has been working with industry on a stop-gap AI Pact for some months — and it confirmed today this is intended to plug the practical gap until the AI Act comes into force.

    While foundational models/GPAIs that have been commercialized face regulation under the Act, R&D is not intended to be in scope of the law — and fully open sourced models will have lighter regulatory requirements than closed source, per today’s pronouncements.

    The agreed package also promotes regulatory sandboxes and real-world testing, to be established by national authorities, to support startups and SMEs in developing and training AIs before placing them on the market.

    #Intelligence_artificielle #AIAct #Europe #Régulation