• Tech CEOs Want Every Worker to Have a Permanent, Publicly-Available Job Performance File
    https://www.vice.com/en/article/n7zj9z/tech-ceos-want-every-worker-to-have-a-permanent-publicly-available-job-perform

    It is also in line with a growing trend among tech companies that, spurred by work-from-home and hybrid work, are increasingly interested in quantifying employee performance. The most prominent example is Coinbase introducing an app so employees can constantly rate each other’s performances, a scenario even the normally cheery TechCrunch said “sounds rough.”

    Over the last several years, there has been a boom in employee management software solutions such as Workday, Lattice, and CultureAmp, which are used across thousands of companies for performance reviews and other sensitive HR tasks. Technologically speaking, what Youakim and Hoffman are talking about is opening those confidential resources—or some condensed version of them that can be easily digested and analyzed—up to everyone. None of these HR software companies have indicated that they have any intention of doing this.

    Laszlo Bock, Google’s former head of people operations—its term for HR—once described hiring as “a complete random mess.” In a 2013 interview he said, “Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.” At its most charitable, the idea Hoffman and Youakim outlined would inject logic and statistical rigor into this random mess. But experts who have studied hiring extensively draw a different conclusion: that it would allow this complete random mess to follow workers their entire careers, affecting their job prospects, earning potential, and broader lives.

    This is widely acknowledged in HR circles today. The Society for Human Resource Management, the main professional society for HR professionals, specifically says that job performance reviews are not trustworthy and often demonstrate bias.

    Carhart echoed these concerns. “Many companies’ employee performance management and assessment processes are ripe with bias, inaccuracies, and inconsistencies. And even while the HR industry has made great strides—and will continue to do so—to address these challenges, the fact remains that not all companies judge success and failure, good performance and poor performance, in the same way. They also aren’t measuring employees in the same way, on the same scale, or tracking the same metrics.”

    This echoed a point Hausknecht made about the co-op of data-sharing companies. It assumes people don’t change, that different jobs require similar attributes, and that a person’s experience at one company is relevant to another, where they will face a different environment, a different manager, and a different company culture. It assumes the problem to be solved is attracting the “right” worker, which is the wrong question. Google found in its study that it was much easier and more effective to help underperforming managers improve than to hire new people. The problem, in other words, is not hiring more loyal workers but fixing bad leaders.

    More to the point, the idea of scoring workers across companies, Carhart argues, is regressive, pushing the HR industry backwards. “Performance abilities should be based on alignment to company culture, quality of management, and opportunities for growth,” he said in an email. “The idea of having ‘a score’ goes back to when HR was seen as having a one-directional function and purpose—to be a resource to the company—as opposed to where we are now, where HR and People functions are strategic business partners to help facilitate the kind of people success that ultimately drives business success.”

    Or, to put it a different way, “Just because we can track it, collect it, and ask about it,” Hausknecht said, “doesn’t necessarily mean we should.”

    #travail #Société_contrôle #surveillance

  • The Gnawing Anxiety of Having an Algorithm as a Boss - Bloomberg
    https://www.bloomberg.com/news/articles/2019-06-26/the-gnawing-anxiety-of-having-an-algorithm-as-a-boss

    I recently got the internet in my apartment fixed, and my technician had an unusual request. I’d get an automated call after he left asking me how satisfied I was with the service, he explained, and he wanted me to rate him 9 out of 10. I asked why, and he said there was a glitch with the system that recorded any 10 rating as a 1, and it was important for him to keep his rating up.

    Since then, a couple of people have told me that technicians working for the company have been making this exact request for at least two years. A representative for Spectrum, my internet provider, said they were worrying over nothing. The company had moved away from the 10-point rating system, he said, adding that customer feedback isn’t “tied to individual technicians’ compensation.”

    But even if the Spectrum glitch exists only in the lore of cable repairmen, the anxiety it’s causing them is telling. Increasingly, workers are subject to automated decision-making systems, just as those systems already shape the experience of people who read the news, apply for loans, or shop in stores. It only makes sense that they’d try to bend those systems to their advantage.

    There exist at least two separate academic papers with the title “Folk Theories of Social Feeds,” detailing how Facebook users divine what its algorithm wants, then try to use those theories to their advantage.

    People with algorithms for bosses have particular incentive to push back. Last month, a local television station in Washington covered Uber drivers who conspire to turn off their apps simultaneously in order to trick its system into raising prices.

    Alex Rosenblat, the author of Uberland, told me that these acts of digital disobedience are essentially futile in the long run. Technology centralizes power and information in a way that overwhelms mere humans. “You might think you’re manipulating the system,” she says, but in reality “you’re working really hard to keep up with a system that is constantly experimenting on you.”

    Compared to pricing algorithms, customer ratings of the type that worried my repairman should be fairly straightforward. Presumably it’s just a matter of gathering data and calculating an average. But online ratings are a questionable way to judge people even if the data they’re based on are pristine—and they probably aren’t. Academics have shown that customer ratings reflect racial biases. Complaints about a product or service can be interpreted as commentary about the person who provided it, rather than the service itself. And companies like Uber require drivers to maintain such high ratings that, in effect, any review that isn’t maximally ecstatic is a request for punitive measures.

    #Travail #Surveillance #Algorithme #Stress #Société_contrôle

  • Why Is Border Patrol Planning to DNA Test Asylum Seekers? - Pacific Standard
    https://psmag.com/news/why-is-border-patrol-planning-to-dna-test-asylum-seekers

    Next week, at two locations along the United States’ southern border, authorities will begin swabbing the cheeks of migrants and asylum seekers traveling as families to complete DNA tests. The new pilot program, first reported by CNN, seeks to identify what the Department of Homeland Security calls “fraudulent families.” The DNA tests can provide results in as little as 90 minutes.

    For months, the Trump administration has claimed that some asylum seekers arriving on the border with children are not actually families, but rather adults traveling with unrelated children. The administration argues that these people hope to take advantage of laws that limit the amount of time children and families can remain in detention.

    Why Civil Rights Advocates Are Worried

    Some advocates are suspicious of the administration’s motives. In the past, immigration authorities have made fraud claims and separated legitimate parents from their children. Others say that the administration is exaggerating the scale of the problem: According to BuzzFeed News, immigration officials say they have identified 3,100 fraudulent families in the last year—but that represents less than 1 percent of the 256,821 family units apprehended. Some say the number should be even lower, because officials consider a family fraudulent if they believe that a child is not actually under 18 years old. (In the past, immigration authorities have been accused of ignoring evidence that people in adult detention facilities are actually minors.)

    Arguing that the DNA tests are unnecessary, the American Civil Liberties Union says the new plan represents another attempt by the administration to “intimidate and deter” asylum seekers. “Forced DNA collection is coercive and intrusive, and it raises serious privacy and civil liberties concerns,” Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, writes in a statement sent to Pacific Standard.

    #ADN #Vie_Privée #Société_contrôle #Données_personnelles

  • “Honneur aux lycéens” (Honor to the High Schoolers) | Le Club de Mediapart
    https://blogs.mediapart.fr/laurence-de-cock/blog/081218/honneur-aux-lyceens

    The images of the arrest of 150 young people in Mantes will remain. They are indelible. That is exactly what those who deliberately filmed and circulated them wanted. Much good may it do them; those images tarnish them forever, just as they tarnish those who supported and justified them. But, more seriously, the events themselves will stay etched in the minds of these children on their knees, hands on their heads, humiliated as never before. The events will leave marks whose scope and effects we cannot yet measure. In the meantime, the high school students who marched chose to reclaim the symbol and turn it back on the hateful, ugly faces of the Republic. It has often been said that youth is beautiful. Yesterday, they shouted their pride by freely repossessing their own gestures: on their knees, yet standing tall, throwing their dignity in the faces of their tormentors.

    And yet many of us (I even believe a large majority of us) cannot bear to see our fragile work short-circuited by excesses of swaggering authoritarianism and repression. We cannot bear it because it humiliates us in turn. The little remark by the amateur videographer attests to this: “Now there’s a class that behaves itself. I don’t think they’ve ever seen anything like it. We’ll show this to their teachers.” So they also meant to address us, to make us appreciate how effectively they can subdue our students like criminals: handcuffed, facing the wall, ordered to stare straight ahead. They no doubt believed we would applaud, thank them, melt into tears of gratitude. But it is an entire profession they have trampled. We will not forget either.

    Now it is young people as a whole who are in the crosshairs, and have been for several years. Perhaps since the day, under the Valls government, when 15-year-old students were violently beaten in front of their high school (Bergson) at the start of the movement against the labor law. So today it is we, the teachers who believe in our students’ intelligence and train them daily in the values of democracy, who are pointed at as the Republic’s new fire-breathers. Young people, it seems, must be discouraged from organizing and demonstrating. But since when did we become the guardians of a democracy reduced to empty incantations? Perhaps since the institution itself decided to demolish every space where young people’s political consciousness might take shape: starting with rooms in high schools for holding general assemblies, and continuing with the elimination of curricula committed to pluralism and to the contributions of the social sciences, in economics and social sciences, or in history and geography, for example.

    #Education #Société_contrôle

  • Microsoft sounds an alarm over facial recognition technology - The Verge
    https://www.theverge.com/2018/12/7/18129858/microsoft-facial-recognition-ai-now-google

    AI Now is a group affiliated with New York University that counts as its members employees of tech companies including Google and Microsoft. In a new paper published Thursday, the group calls on governments to regulate the use of artificial intelligence and facial recognition technologies before they can undermine basic civil liberties. The authors write:

    Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.

    The AI Now researchers are particularly concerned about what’s called “affect recognition” — an attempt to identify people’s emotions, and possibly manipulate them, using machine learning.

    #Reconnaissance_faciale #Société_numérique #Société_contrôle #Dystopie

  • The Telegram app blocked in Iran to “maintain order”
    http://abonnes.lemonde.fr/pixels/article/2018/01/02/l-application-telegram-bloquee-en-iran-pour-maintenir-l-ordre_523683

    On December 30, 2017, Iran’s telecommunications minister, Mohammad Javad Azari Jahromi, had lashed out on Twitter (a social network that has nonetheless been banned for years in the Islamic Republic, as has Facebook) at Telegram’s Russian founder, Pavel Durov:

    “A Telegram channel is encouraging hateful behavior, the use of Molotov cocktails, armed uprising, and social unrest. NOW is the time to put an end to such incitement on Telegram.”

    Pavel Durov replied shortly afterward: “Calls to violence are prohibited by Telegram’s rules (…) If this is confirmed, we will have to block that channel, regardless of its audience or political affiliation.”

    While the app was very permissive in its first years of existence, it eventually introduced some moderation rules after the 2015 attacks in the Paris region (Telegram having been favored by the Islamic State organization): users are normally no longer allowed to share calls to violence, pornography, or copyright-infringing files on public channels.

    Two hours later, Telegram’s founder announced that he had indeed suspended the Amadnews channel because it had “urged its subscribers to use Molotov cocktails against the police,” which violates Telegram’s rules.

    While Mr. Durov has repeatedly insisted that “Telegram has no agreement with any government on this planet,” Tehran has claimed the opposite several times, saying the app had committed to removing anti-religious content and accounts flagged by the government. Iran had also mentioned the existence of a “smart filtering” tool capable of detecting certain types of content. Telegram has always denied this.

    More broadly, the app has long been criticized by cybersecurity activists, such as the American whistleblower Edward Snowden, who in 2013 revealed the scale of the surveillance carried out by the United States. Like others, he faults Telegram for presenting itself as a secure app when only some parts of it are (the “secret” chats, an option that must be enabled), which could lead users to wrongly believe they are safe. Many cryptography experts also consider Telegram’s encryption protocol less mature than that of competing technologies.

    After the Amadnews channel was shut down on Saturday, Edward Snowden published a series of tweets criticizing Telegram:

    “You cannot stop an independent, destabilizing service from being blocked in authoritarian regimes. You can only delay it. So you have to think about how to protect people by making sure the service remains accessible ‘even after the block.’ And that is what really worries me.”

    Snowden criticizes in particular the app’s centralized structure and what he sees as Telegram’s reassuring message to its users, which amounts to: “Trust us not to hand over your data, not to read your messages, not to shut down your channels.”

    “Maybe Durov is an angel,” Snowden continues. “I hope so! But angels have fallen before.” In his view, “Telegram should have been working for years to make channels decentralized, that is, beyond its control.”

    Wait, let me think back... wasn’t it the interior ministers of the European Union countries who, back in that sweet year of 2017, wanted to ban Telegram because it was encrypted?

    #Telegram #Société_contrôle #censure #Iran