• A Privacy-Focused Vision for Social Networking | Mark Zuckerberg, Facebook, March 6, 2019
    https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634

    Over the last 15 years, Facebook and Instagram have helped people connect with friends, communities, and interests in the digital equivalent of a town square. But people increasingly also want to connect privately in the digital equivalent of the living room. As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.

    Today we already see that private messaging, ephemeral stories, and small groups are by far the fastest growing areas of online communication. There are a number of reasons for this. Many people prefer the intimacy of communicating one-on-one or with just a few friends. People are more cautious of having a permanent record of what they’ve shared. And we all expect to be able to do things like payments privately and securely.

    Public social networks will continue to be very important in people’s lives — for connecting with everyone you know, discovering new people, ideas and content, and giving people a voice more broadly. People find these valuable every day, and there are still a lot of useful services to build on top of them. But now, with all the ways people also want to interact privately, there’s also an opportunity to build a simpler platform that’s focused on privacy first.

    I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform — because frankly we don’t currently have a strong reputation for building privacy protective services, and we’ve historically focused on tools for more open sharing. But we’ve repeatedly shown that we can evolve to build the services that people really want, including in private messaging and stories.

    I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about.

    We plan to build this the way we’ve developed WhatsApp: focus on the most fundamental and private use case — messaging — make it as secure as possible, and then build more ways for people to interact on top of that, including calls, video chats, groups, stories, businesses, payments, commerce, and ultimately a platform for many other kinds of private services.

    This privacy-focused platform will be built around several principles:

    Private interactions. People should have simple, intimate places where they have clear control over who can communicate with them and confidence that no one else can access what they share.

    Encryption. People’s private communications should be secure. End-to-end encryption prevents anyone — including us — from seeing what people share on our services.

    Reducing Permanence. People should be comfortable being themselves, and should not have to worry about what they share coming back to hurt them later. So we won’t keep messages or stories around for longer than necessary to deliver the service or longer than people want them.

    Safety. People should expect that we will do everything we can to keep them safe on our services within the limits of what’s possible in an encrypted service.

    Interoperability. People should be able to use any of our apps to reach their friends, and they should be able to communicate across networks easily and securely.

    Secure data storage. People should expect that we won’t store sensitive data in countries with weak records on human rights like privacy and freedom of expression in order to protect data from being improperly accessed.

    Over the next few years, we plan to rebuild more of our services around these ideas. The decisions we’ll face along the way will mean taking positions on important issues concerning the future of the internet. We understand there are a lot of tradeoffs to get right, and we’re committed to consulting with experts and discussing the best way forward. This will take some time, but we’re not going to develop this major change in our direction behind closed doors. We’re going to do this as openly and collaboratively as we can because many of these issues affect different parts of society.

    French-language summary: « Mark Zuckerberg veut recentrer Facebook sur les échanges privés » https://www.lesechos.fr/tech-medias/hightech/0600849596938-mark-zuckerberg-veut-recentrer-facebook-sur-les-echanges-priv

    • « Welcome to Mark Zuckerberg’s information ghetto », I read in Buzzfeed’s « Fake Newsletter »:

      (…) More than anything, though, I think it’s a response to the central problem that has plagued Facebook for years: Its scale. More than two billion people log into it every month, all around the world. They upload and interact with more content than humanity ever conceived of creating.

      Zuckerberg and his leadership team may have come to the realization that they achieved a truly unmanageable scale.

      They need to find ways to offer people value (and keep them on the platform) while reducing the overall amount of what I’ll call Addressable Content. This is content that’s publicly accessible on Facebook and could require review by a content moderator, or be the subject of takedown requests from governments or other entities.

      Addressable Content costs Facebook money and can result in regulation, harm to moderators, public outcry, and lawsuits.

      Zuckerberg’s new focus will reduce the total amount of Addressable Content by enabling content that disappears, that is encrypted end to end, and that only reaches a small group of people.

      Facebook will still have huge amounts of public content, and it will always need moderators. But by shifting content production and interaction out of more public spaces on the platform, the company can get its costs and controversies under control. It can manage its scale, while still collecting a motherlode of data on its users and serving them ads.

      Zuck’s plan could be a great business solution, unlocking more growth for Facebook at a time when one can reasonably wonder how, without access to China, it can continue to grow.

      But it’s also a solution that will push all that false, conspiratorial, violent, harmful, and hateful content off into information ghettos where journalists, researchers, and watchdogs will have a much more difficult time finding it and calling it out. — Craig

      More articles on #modération (a part of #CM)

      The secret lives of Facebook moderators in America
      https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-

    • Facebook’s pivot to privacy is missing something crucial https://www.wired.com/story/facebook-zuckerberg-privacy-pivot

      Zuckerberg listed six privacy principles, but there was one glaring omission: He said nothing about how Facebook plans to approach data sharing and ad targeting in this privacy-focused future. The free flow of data between Facebook and third-party developers is, after all, the issue that caused the jaws of the national media to snap onto the company’s leg. One year ago this month, news broke that a man named Aleksandr Kogan had misappropriated the data of tens of millions of users and sent it to a shady political consulting firm called Cambridge Analytica. It soon became clear that Cambridge Analytica was not alone and that Facebook had allowed thousands of developers to collect data for years.

      The company’s loose policies on data collection over the years are also what allowed it to build one of the most successful advertising businesses in history. All the data the company collects helps advertisers segment and target people. And it’s the relentless pursuit of that data that has led to Facebook being accused of making inappropriate deals for data with device manufacturers and software partners. This is a history that Zuckerberg knows well, and one that he acknowledged in his post. “I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform—because frankly we don’t currently have a strong reputation for building privacy protective services,” he wrote.

  • The secret lives of Facebook moderators in America
    by Casey Newton for The Verge, February 25, 2019
    https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-

    The job resembles a high-stakes video game in which you start out with 100 points — a perfect accuracy score — and then scratch and claw to keep as many of those points as you can. Because once you fall below 95, your job is at risk.

    Fearing for his safety, Randy began bringing a concealed gun to work. Fired employees regularly threatened to return to work and harm their old colleagues, and Randy believed that some of them were serious. A former coworker told me she was aware that Randy brought a gun to work, and approved of it, fearing on-site security would not be sufficient in the case of an attack.

    Miguel is also allotted nine minutes per day of “wellness time,” which he is supposed to use if he feels traumatized and needs to step away from his desk. Several moderators told me that they routinely used their wellness time to go to the restroom when lines were shorter. But management eventually realized what they were doing, and ordered employees not to use wellness time to relieve themselves.

    At the Phoenix site, Muslim workers who used wellness time to perform one of their five daily prayers were told to stop the practice and do it on their other break time instead.

    Cognizant employees are told to cope with the stress of the jobs by visiting counselors, when they are available; by calling a hotline; and by using an employee assistance program, which offers a handful of therapy sessions.

    They [6 employees] told me they coped with the stress of the job in other ways: with sex, drugs, and offensive jokes.

    “You get really close to your coworkers really quickly,” she says. “If you’re not allowed to talk to your friends or family about your job, that’s going to create some distance. You might feel closer to these people. It feels like an emotional connection, when in reality you’re just trauma bonding.”

    As an ethnic minority, Li was a frequent target of his coworkers, and he embraced what he saw as good-natured racist jokes at his expense, he says.

    After the Parkland shooting last year, moderators were initially horrified by the attacks. But as more conspiracy content was posted to Facebook and Instagram, some of Chloe’s colleagues began expressing doubts [...] "People started Googling things instead of doing their jobs and looking into conspiracy theories about them. We were like, ‘Guys, no, this is the crazy stuff we’re supposed to be moderating. What are you doing?’”

    “A lot of people don’t actually make it through the training,” she says. “They go through those four weeks and then they get fired. They could have had that same experience that I did, and had absolutely no access to counselors after that.”

    “There was nothing that they [the counselors] were doing for us,” Li says, “other than expecting us to be able to identify when we’re broken. Most of the people there that are deteriorating — they don’t even see it.”

    Last week, after I told Facebook about my conversations with moderators, the company invited me to Phoenix to see the site for myself. [...] The day before I arrived at the office park where Cognizant resides, one source tells me, new motivational posters were hung up on the walls. [...] After meetings with executives from Cognizant and Facebook, I interview five workers who had volunteered to speak with me. They stream into a conference room, along with the man who is responsible for running the site. With their boss sitting at their side, employees acknowledge the challenges of the job but tell me they feel safe, supported, and believe the job will lead to better-paying opportunities — within Cognizant, if not Facebook.

    They [two counselors including the doctor who set up on-site counseling] tell me that they check in with every employee every day. They say that the combination of on-site services, a hotline, and an employee assistance program is sufficient to protect workers’ well-being.

    “If we weren’t there doing that job, Facebook would be so ugly,” Li says. “We’re seeing all that stuff on their behalf. And hell yeah, we make some wrong calls. But people don’t know that there’s actually human beings behind those seats.”

    That people don’t know there are human beings doing this work is, of course, by design. Facebook would rather talk about its advancements in artificial intelligence, and dangle the prospect that its reliance on human moderators will decline over time.

    They [moderators] do the work as long as they can — and when they leave, an NDA ensures that they retreat even further into the shadows.

    To Facebook, it will seem as if they never worked there at all. Technically, they never did.

    • A realistic article on moderators’ working conditions, with some specifics that seem to be American or particular to this employer.
      In Germany, the video that shocks every trainee is one of zoophiles; there are also ritualized murder scenes, such as the ISIS executions, but nothing as traumatizing as the video described here.
      The goal, even so, is for moderators to make it to the end of their training path.
      Once on the job, the work begins with processing standard collections and continues with active and passive shadowing. The escalation into horror is almost immediate, but supervised.
      Keep in mind that this is a job that requires no qualifications. Publicity about the working conditions makes recruiting harder and widens the pool of eligible candidates:
      people in more precarious economic situations, more women, more minorities, and older candidates ^^.

      As for the job itself turning moderators into conspiracy theorists, I have my doubts; I suspect the ground was already quite fertile.

      When a « colleague » talks about Islamist neighborhoods in France... FB or BFM?

      « So they still delete the violent stuff and the porn, but do nothing about this conspiracy content. » That, however, is nonsense: the entire job consists of applying a policy built by FB, conspiracy theories are not part of it, and in the example cited by the article, autistic people are not a protected category or group. Let’s stop believing that moderators operate in freestyle mode.

    • Well, I think I want to go on ignoring everything about the world depicted in this article.

      I got as far as

      “Autistic people should be sterilized” seems offensive to him, but it stays up as well. Autism is not a “protected characteristic” the way race and gender are, and so it doesn’t violate the policy. (“Men should be sterilized” would be taken down.)

      which acted as a line not to be crossed.

      For the first two thirds of this article I kept wondering why such a moderation task (with criteria this twisted) couldn’t, in fact, be handled by artificial intelligence. It doesn’t seem to me that a robot could do worse than the example quoted above.

    • @philippe_de_jonckheere Quite simply because all the information must first be « cleaned » and formalized. Algorithms are still incapable of making certain decisions. This means that human involvement will gradually disappear as algorithms advance, which in my view is bad news. The day moderation is done solely by robots, good luck getting the slightest information about FB’s internal kitchen.
      Internal human activity is less a sign of transparency than a possibility of leaks.
      And then:
      « Vers l’automatisation de la #Censure politique »
      https://seenthis.net/messages/762211

      D. Dalton, rapporteur sur le règlement antiterroriste, est sur le point d’autoriser la #Censure de masse
      https://seenthis.net/messages/755670

      Délégation de la censure aux géants du Web
      https://www.laquadrature.net/censureterro

      Abécédaire de la société de surveillance
      https://seenthis.net/messages/756098
      And what will become of FB’s relationships with banks and insurers?

    • Having never set foot on that piece of garbage Facebook, I honestly don’t much care. And as far as I’m concerned, to find any interest whatsoever in Facebook, you must already have a good part of yourself robotized. A little more, a little less. I am far more worried about the people who have to do what you call cleaning than I will ever be about any Facebook user.

    • ❝Bob Duncan, who oversees Cognizant’s content moderation operations in North America, explains that recruiters carefully walk applicants through the graphic nature of the job. « We share examples of the kinds of things you might see... so that they have an understanding », he says. « The intent of all this is to make sure people understand it. And if they feel the work isn’t right for them, they can make whatever decisions are appropriate. »

      Interview, 5 min.