These cases show we can’t trust algorithms to clean up the internet


  • Julia Reda – When filters fail : These cases show we can’t trust algorithms to clean up the internet
    https://juliareda.eu/2017/09/when-filters-fail

    by Julia Reda

    Nine examples of the automatic content-detection systems used by YouTube getting it wrong. Law is also a matter of context, something algorithms cannot take into account (and which deserves adversarial debate).

    The Commission now officially “strongly encourages online platforms to […] step up investment in, and use of, automatic detection technologies”. It wants platforms to make decisions about the legality of content uploaded by users without requiring a court order or even any human intervention at all: “online platforms should also be able to take swift decisions […] without being required to do so on the basis of a court order or administrative decision”.

    Installing censorship infrastructure that surveils everything people upload and letting algorithms make judgement calls about what we all can and cannot say online is an attack on our fundamental rights.

    But there’s another key question: Does it even work? The Commission claims that where automatic filters have already been implemented voluntarily – like YouTube’s Content ID system – “these practices have shown good results”.

    Oh, really? Here are examples of filters getting it horribly wrong, ranging from hilarious to deeply worrying:

    #Copyright #YouTube #Filtrage_automatique #Europe #Droit_auteur