#zeynep_tufekci

  • Zeynep Tufekci: Get a red team to ensure AI is ethical | Verdict
    https://www.verdict.co.uk/zeynep-tufekci-ai-red-team

    In cybersecurity, red team professionals are tasked with finding vulnerabilities before they become a problem. In artificial intelligence, flaws such as bias often become apparent only once a system is deployed.

    One way to catch these AI flaws early is for organisations to apply the red team concept when developing new systems, according to techno-sociologist and academic Zeynep Tufekci.

    “Get a red team, get people in the room, wherever you’re working, who think about what could go wrong,” she said, speaking at Hitachi Vantara’s Next conference in Las Vegas, US, last week. “Because thinking about what could go wrong before it does is the best way to make sure it doesn’t go wrong.”

    Referencing Hitachi CEO and president Toshiaki Higashihara’s description of digitalisation as having “lights and shadows”, Tufekci warned of the risks of letting the shadowy side go unchecked.
    AI shadows

    One of these “shadows” is when complex AI systems become black boxes, making it difficult even for the AI’s creators to explain how it made its decision.

    Tufekci also cited the example of YouTube’s recommendation algorithm pushing people towards extremism. For example, a teenager could innocently search ‘is there a male feminism’ and then be nudged towards misogynistic videos because such controversial videos have received more engagement.

    And while data can be used for good, it can also be used by authoritarian governments to repress their citizens, or by election consultancies to manipulate our votes.

    Then there are the many instances of human bias finding their way into algorithms. These include AI in recruitment reflecting the sexism of human employers or facial recognition not working for people with darker skin.

    “If the data can be used to fire you, or to figure out protesters or to use for social control, or not hire people prone to depression, people are going to be like: ‘we do not want this’,” said Tufekci, who is an associate professor at the UNC School of Information and Library Science.

    “What would be much better is to say, what are the guidelines?”
    Using a red team to enforce AI ethics guidelines

    Some guidelines already exist. In April 2019, the European Union’s High-Level Expert Group on AI presented seven key requirements for trustworthy AI.

    These requirements include human oversight, accountability and technical robustness and safety. But what Tufekci suggests is having a team of people dedicated to ensuring AI ethics are adhered to.
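
    A concrete way to picture that team’s work: before a system ships, someone runs adversarial checks against it, the way a security red team probes for vulnerabilities. Below is a minimal sketch, in Python, of one such check: a disparate-impact test on a hiring model’s selection rates. The model and the applicant data are entirely hypothetical stand-ins, not any real system.

        # One check an AI ethics red team might automate before deployment:
        # a disparate-impact test on a hiring model. The model and data
        # below are hypothetical stand-ins for illustration only.
        import random

        random.seed(0)

        def hiring_model(applicant):
            # Placeholder for a real classifier; imagine it was trained
            # on biased historical hiring data.
            return random.random() < (0.40 if applicant["group"] == "A" else 0.25)

        applicants = [{"group": random.choice(["A", "B"])} for _ in range(10_000)]

        rates = {}
        for group in ("A", "B"):
            members = [a for a in applicants if a["group"] == group]
            rates[group] = sum(hiring_model(a) for a in members) / len(members)

        ratio = min(rates.values()) / max(rates.values())
        print(f"selection rates: {rates}, ratio: {ratio:.2f}")
        # The informal "four-fifths rule" from US employment law flags
        # selection-rate ratios below 0.8 as potential adverse impact.
        if ratio < 0.8:
            print("Red flag: adverse impact -- investigate before deployment.")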

    “You need people in the room, who are going to say there’s light and there are shadows in this technology, and how do we figure out to bring more light into the shadowy side, so that we’re not blindsided, so that we’re not just sort of shocked by the ethical challenges when they hit us,” she explained.

    “So we think about it ahead of time.”

    However, technology companies often push back against regulation, usually warning that too much will stifle innovation.

    “Very often when a technology is this new, and this powerful, and this promising, the people who keep talking about what could go wrong – which is what I do a lot – are seen as these spoilsport people,” said Tufekci.

    “And I’m kind of like no – it’s because we want it to be better.”

    #Intelligence_artificielle #Zeynep_Tufekci #Cybersécurité #Biais #Big_data

  • Review of « Twitter & les gaz lacrymogènes » by Stéphane Bortzmeyer
    https://www.bortzmeyer.org/twitter-gaz-lacrymos.html

    Much has been written about the role of the Internet, and of centralized social networks such as Facebook or Twitter, in political events. The Arab Spring is one example. In this very rich and very rigorous book, the author explores every aspect of the relationship between activists and information and communication technologies. Can Twitter beat tear gas?

    One reason so many accounts of Internet-based political movements are one-sided is that many of their authors are technology enthusiasts who know little about politics and discover activism as if they were the first people ever to engage in it; or else they know politics well but are wholly ignorant of the technology, which they treat as a monolith animated by a will of its own (the famous “algorithms”) rather than as tools that people put to use. The author, by contrast, first a computer scientist and then a political science researcher, knows both sides well. She has studied many movements in depth (the Zapatistas in Mexico, Occupy Wall Street, the occupation of Gezi Park, Black Lives Matter, the Tunisian and Egyptian revolutions), often on the ground, breathing the tear gas. (The gilets jaunes are absent, although that movement would certainly deserve study in its relationship to Facebook; the book was published before it began.) And she analyzes the role of the Internet as a researcher who knows it well and sees both its strengths and its limits.

    Among the Internet’s affordances is that a great deal becomes possible without formal organization. Very strong movements (such as the Gezi Park occupation) have emerged without a traditional party structuring and directing them. This advantage has a flip side, of course: since the need for organization is not obvious, a movement may decide it can do without one. At first everything goes well, free of exasperating bureaucratic heaviness. Then the problems arrive: the government makes overtures. How should the movement respond? Or the government changes tactics, and the movement must adapt. At that point the lack of a shared decision-making mechanism makes itself felt, and many movements weaken, letting repression scatter what remains.

    One slight criticism of the author: she does not discuss what might happen with tools other than the big centralized US networks like Facebook or Twitter. True, detailed examples are still scarce; no revolution has yet been set off on the fediverse or via Matrix.

    I have given only a very limited idea of this book. It is rich and nuanced, the author truly set out to study everything in detail, and no summary can do it justice. In conclusion: a book I recommend to everyone who wants to change the world and wonders how to go about it. It is neither optimistic nor pessimistic about the role of the Internet in revolutions: “neither laugh nor cry, but understand.”

    #Zeynep_Tufekci #C&F_éditions #Stéphane_Bortzmeyer

  • Tufekci Joins The Atlantic As Contributing Writer - The Atlantic
    https://www.theatlantic.com/press-releases/archive/2019/09/tufekci-joins-atlantic-contributing-writer/598678

    Zeynep Tufekci is joining The Atlantic as a contributing writer, editor in chief Jeffrey Goldberg announced today. In this role, Tufekci will write regularly for The Atlantic about the intersection of technology, politics, and society.

    “Zeynep has an uncanny ability, through clear writing and clear thinking, to make the incomprehensible understandable, and to spot trends before most anyone else,” Goldberg said.

    Tufekci will appear at The Atlantic Festival tomorrow in Washington, D.C., in conversation with Goldberg. She will expand on the topics explored in her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, where she examined the possibilities and perils of modern protest movements that are increasingly rooted in online media.

    Tufekci has been a contributing opinion writer at The New York Times and a columnist for Wired and Scientific American. She is currently an associate professor at the University of North Carolina, Chapel Hill’s School of Information and Library Science and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University. Throughout her research and academic work, Tufekci has studied the convergence of social change, machine intelligence, privacy, and surveillance.

    #Zeynep_Tufekci #The_Atlantic

  • Digital technologies: an asset or a handicap for social struggles?
    https://reporterre.net/Les-technologies-numeriques-atout-ou-handicap-des-luttes

    In « Twitter & les gaz lacrymogènes », Zeynep Tufekci examines the place of digital networks in political mobilizations: an undeniable power, but a fragile one, given the monopolies of the web economy.

    The publisher’s presentation of the book:

    Social movements around the world make massive use of digital technologies. Zeynep Tufekci was in Tahrir Square and in Tunisia during the Arab Spring, in Istanbul for the defense of Gezi Park, in the streets of New York with Occupy, and in Hong Kong during the Umbrella Movement. She observed how mobile phones and social media were used there, and offers a captivating account of it here.

    Digital networks make it possible to bear witness and to accelerate mobilization. They help movements focus attention on their demands. The digital public space, however, depends on the monopolies of the web economy. Their algorithms, chosen for economic reasons, can weaken the echo of protest. And beyond their power to mobilize and react, building movements on these technologies leaves organizations fragile when they must endure, negotiate, or change tactical objectives.

    For their part, the powers that be have learned to use digital media to create confusion and disinformation, to distract, and to demobilize activists, producing resignation, cynicism, and a sense of powerlessness. This situation shows that social struggles must now build the stakes of information and communication into their strategy, alongside their specific goals.

    Zeynep Tufekci is a professor at the University of North Carolina (United States). Born in Turkey, she began as a software developer before turning to the humanities and social sciences. She now calls herself a “techno-sociologist”. A regular columnist for The Atlantic and The New York Times, she gives widely viewed TED talks that show her ability to captivate an audience while raising essential questions about the uses of social media.

    Twitter & les gaz lacrymogènes. Forces et fragilités de la contestation connectée, by Zeynep Tufekci, C&F éditions, September 2019, 430 pp., €29.

    #C&F_éditions #Zeynep_Tufekci

  • It’s the (Democracy-Poisoning) Golden Age of Free Speech | WIRED
    https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship

    By Zeynep Tufekci

    In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

    And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

    Or let’s say you were the one who posted that video. If so, is anyone even watching it? Or has it been lost in a sea of posts from hundreds of millions of content producers? Does it play well with Facebook’s algorithm? Is YouTube recommending it?

    Maybe you’re lucky and you’ve hit a jackpot in today’s algorithmic public sphere: an audience that either loves you or hates you. Is your post racking up the likes and shares? Or is it raking in a different kind of “engagement”: Have you received thousands of messages, mentions, notifications, and emails threatening and mocking you? Have you been doxed for your trouble? Have invisible, angry hordes ordered 100 pizzas to your house? Did they call in a SWAT team—men in black arriving, guns drawn, in the middle of dinner?

    These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.

    So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As BuzzFeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”
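
    Mechanically, “optimized for engagement” is simple to state: rank every candidate post by a model’s predicted interactions, blind to what the content actually is. A stylized Python sketch, with posts, probabilities, and weights invented purely for illustration:

        # Stylized sketch of engagement-optimized feed ranking. The posts,
        # probabilities, and weights are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class Post:
            text: str
            p_click: float  # predicted probability of a click
            p_share: float  # predicted probability of a share

        def engagement_score(post: Post) -> float:
            # A typical form: a weighted blend of predicted interactions.
            return 1.0 * post.p_click + 3.0 * post.p_share

        feed = [
            Post("local council budget report", p_click=0.02, p_share=0.001),
            Post("outrageous fake election story", p_click=0.15, p_share=0.06),
            Post("saccharine animal video", p_click=0.12, p_share=0.03),
        ]

        # Nothing in the objective distinguishes news from fakery; the
        # ranker surfaces whatever it predicts will keep us scrolling.
        for post in sorted(feed, key=engagement_score, reverse=True):
            print(f"{engagement_score(post):.3f}  {post.text}")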

    For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

    What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.

    The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

    This idea that more speech—more participation, more connection—constitutes the highest, most unalloyed good is a common refrain in the tech industry. But a historian would recognize this belief as a fallacy on its face. Connectivity is not a pony. Facebook doesn’t just connect democracy-loving Egyptian dissidents and fans of the videogame Civilization; it brings together white supremacists, who can now assemble far more effectively. It helps connect the efforts of radical Buddhist monks in Myanmar, who now have much more potent tools for spreading incitement to ethnic cleansing—fueling the fastest-growing refugee crisis in the world.

    The freedom of speech is an important democratic value, but it’s not the only one. In the liberal tradition, free speech is usually understood as a vehicle—a necessary condition for achieving certain other societal ideals: for creating a knowledgeable public; for engendering healthy, rational, and informed debate; for holding powerful people and institutions accountable; for keeping communities lively and vibrant. What we are seeing now is that when free speech is treated as an end and not a means, it is all too possible to thwart and distort everything it is supposed to deliver.

    By this point, we’ve already seen enough to recognize that the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation, and polarization.

    #Zeynep_Tufekci #Médias_sociaux #Liberté_expression #Espace_public #Désinformation #Attention

  • Hong Kong protesters challenge surveillance with apps and umbrellas
    https://www.pri.org/stories/2019-08-14/hong-kong-protesters-challenge-surveillance-apps-and-umbrellas

    Interview with Zeynep Tufekci from Hong Kong

    China said on Wednesday Hong Kong’s protest movement had reached “near terrorism,” as more street clashes followed ugly scenes a day earlier at the airport where demonstrators set upon two men they suspected of being government sympathizers.

    By nightfall, police and protesters were again clashing on the streets, with riot officers shooting tear gas almost immediately as their response to demonstrators toughened.

    Flights resumed at Hong Kong airport, which is one of the world’s busiest, after two days of disruptions. Thousands of protesters have occupied the airport for days, forcing the cancellation of hundreds of departures on Monday and Tuesday.

    Ten weeks of increasingly violent confrontation between police and protesters have plunged the city into its worst crisis since it reverted from British to Chinese rule in 1997.

    “We’re deeply sorry about what happened yesterday,” read a banner held up by a group of a few dozen demonstrators in the airport arrivals hall in the morning. “We were desperate and we made imperfect decisions. Please accept our apologies,” the banner said.

    They also showed little sign of relenting in their protests, which began in opposition to a now-suspended bill that would have allowed the extradition of suspects for trial in mainland China, but have swelled into wider calls for democracy.

    Zeynep Tufekci, a professor who researches the intersection of technology and society at the University of North Carolina at Chapel Hill, witnessed some of the violence in Hong Kong and spoke with The World’s Marco Werman about the future of the protest movement, its pitfalls, and the role of social media in their quest for democracy.

    Marco Werman: Is what you saw suggesting emerging divisions in the ranks of the protesters?

    Zeynep Tufekci: I don’t think so — if anything, today they’ve issued an apology. They had a lot of discussions in their forums ... This is a bunch of very young people. Most of them haven’t slept [for five days] — they’re at the airport trying to get heard. It feels like a hopeless thing; it was a very high stress situation, and today, there’s a lot of soul searching. As in, how did we get to this point?

    Is there also fear?

    There is a lot of fear. A lot of the ones I talk to feel like the world doesn’t care about them — China’s too important. And a bunch of young people trying to hold on to democratic rights and demands feel like they’re being treated like a nuisance by the world. So they feel — from what I can see — that it’s quite futile, but they’re just going to go down the best they can and try.

    You study how social media is used politically, even in revolutionary movements. Now the Chinese mainland is hailing the two men who were beaten up at the airport as heroes. China is talking about acts of terror in Hong Kong. How does this face-off look on social media?

    Obviously what happened was terrible, but what I’m seeing is a lot of effort to use it to discredit the movement’s demands — demands that are quite reasonable.

    The demands of the Hong Kong protesters?

    There’s universal suffrage, independent inquiry, and releasing arrested protesters — you know, super basic demands, and now there’s a huge social media campaign to use that incident to portray the whole movement as just “terrorists” and violent. So ... the information sphere is where the battle is. Because if you’re just turning on TV in mainland China and you see this — it’s [just] a couple of clips from an hourlong incident, and you don’t really get the rest of the context ... because that’s censored — you might feel that the harsh steps mainland China might decide to take are justified. And also the world might feel like, ’Oh, look, this is just another violent movement,’ which isn’t a correct characterization.

    How were the Hong Kong protesters using social media themselves in these demonstrations? Do you think it’s unique? Or does it pick up threads from other movements that you followed?

    It’s pretty amazing what’s going on in Hong Kong in terms of the use of the internet. Unlike the Arab Spring countries that I’ve also studied, where there was some very efficient use of digital technologies, Hong Kong is a super high-tech place. The internet works very well and very fast, everybody’s digitally literate and otherwise literate, [there is a] very high education [level]. They have lots of phones and they’re using forums. They have their own forums and they’re using Telegram as a place where they hold a lot of these discussions. In fact, the apologies for yesterday’s incident came out of these forums — the soul searching, and ’we shouldn’t ever let this happen again’ came out of these forums. So I don’t really know how it will play out, but everybody’s so digitally adept. The way they swarm and make decisions online and the way they swarm offline and go from place to place is a very 21st-century movement. Ironically, they’re making demands for things a lot of people [wanted] in the 19th century — just universal suffrage. So there is a very stark contrast with this high-tech, high-functioning society, which doesn’t even have some of the basic rights that are almost universal in many countries.

    Well, mainland China is also digitally adept: it’s at the forefront of surveillance technology. I’m curious just how protesters in Hong Kong are dealing with that?

    There are a couple of things that they’re dealing with in terms of surveillance technology: Obviously, the phones are tracked and they’re aware of it. Some of them have multiple phones and some of them turn off their phones. They wear a lot of masks and [they use] umbrellas to block CCTV when they get off at a subway station: The first one jumps out and opens umbrellas to cover all the visible closed-circuit cameras. And then when they want to make decisions, they open their umbrellas and huddle under them. It’s not fully protective against surveillance, but just like many other ... surveilled [people], when I asked them if they were worried about surveillance, they usually say, “There’s so many of us. There’s hundreds of thousands of us in the streets.” And that’s probably their only real protection. I don’t think they can [remain undetected] even with their umbrellas and cool tactics. I don’t think they can avoid the surveillance, so they’re just counting on the fact that the authorities probably cannot jail that many of them all at once.

    Do you think that’s a fair piece of insurance?

    As long as they’re protected by the society, probably. But if there’s some sort of compromise in which the Hongkongers — the well-off middle classes and the financial centers — decide that they’re just going to give in to authoritarianism in return for just having business as usual ... it might not be. And a lot of the ones I talked to, they don’t have, of course, any memory of Tiananmen, they’re just way too young for that. But they heard from their elders about how people fled to Hong Kong after that. And elsewhere too. And a lot of them are like, ’Is there a place in this country for us? Is there a place in the world for us?’ Because, as I said, they’re highly educated — they’re literate, and some are wondering if they should even try to save their own country if it’s going to be futile, and [if they] should just try to emigrate. So I think that’s sort of the reality — they cannot truly escape the surveillance that mainland China can bring to this city. And they also probably cannot escape the consequences. So here we are.

    #Zeynep_Tufekci #Hong_Kong

  • “The Cambridge Analytica scandal is not a technical flaw but a political problem” - Idées - Télérama.fr
    http://www.telerama.fr/idees/le-scandale-cambridge-analytica-nest-pas-une-faille-technique-mais-un-probl

    Interview with Zeynep Tufekci

    This incident, you wrote in The New York Times, is “a natural consequence of Facebook’s business model”…

    Facebook and Google attract most of the attention because they are behemoths, but let’s not forget that the entire online advertising model, and that of most media outlets, rests on the same foundations. The principle is identical everywhere: wherever you click, you are tracked, targeted, monetized, and sold to the highest bidder. The pages you visit, the content you post, all your digital traces are put to commercial use. Whether the buyer is Cambridge Analytica, a would-be dictator, or a vacuum-cleaner brand matters little, since it is a totally asymmetric system in which you never learn who placed the order. That is the Internet’s major problem today. In this “attention economy”, Facebook can count on an infrastructure without equal. Thanks to it, the platform can reach two billion users, screen by screen, without their even realizing it.

    Should we expect more episodes of this kind?

    Obviously. It is mechanically impossible to predict how our data will be used in the years to come. It is a bottomless pit! Even if you are not on Facebook, a gigantic quantity of information about you is circulating and can be used to profile you. Thanks to advances in artificial intelligence, algorithms can analyze your friendships, your activity, your consumption habits. We all probably appear in commercial databases whose existence we are unaware of, linked and cross-referenced with other databases we know nothing about either. In the Cambridge Analytica case, the vast majority of the people whose data was siphoned off had no idea what was happening.

    Is this opacity why we react so late?

    For an ordinary person it is extremely difficult to react, because this collection is invisible, odorless, and colorless. As a user, you see nothing but the content displayed on your screen.

    In that light, what do you make of Mark Zuckerberg’s response?

    He apologized half-heartedly because he had no choice. But he still cast himself as a victim, as if he had been duped by a renegade third party that broke the rules of a game he himself created. I think we should not take any company at its word. We need oversight and protection mechanisms. Take cars: they can have accidents and pose environmental risks. To curb those harms, governments imposed speed limits, seat belts, and environmental standards. Those changes did not happen by themselves: they had to be imposed. And when a company breaks the rules, it is sanctioned. The data-exploitation economy is still a Wild West waiting to be civilized.

    In recent weeks, calls to quit Facebook have multiplied. Is that a viable option?

    It can only be an individual decision. It is everyone’s absolute right, but it is a luxury, and it will not solve the problem: in many countries, Facebook is the only way to communicate with family and friends, and it is an important vector of social organization. It would be better to consider breaking up Facebook while thinking through the possible consequences: if we do not deeply reform the web’s economic model, legions of little Facebooks could prove even more harmful than one centralized platform…

    #Zeynep_Tufekci #Facebook #Cambridge_analytica #Vie_privée #Données_personnelles

  • Facebook’s Surveillance Machine - The New York Times
    https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html

    By Zeynep Tufekci

    Mr. Grewal is right: This wasn’t a breach in the technical sense. It is something even more troubling: an all-too-natural consequence of Facebook’s business model, which involves having people go to the site for social interaction, only to be quietly subjected to an enormous level of surveillance. The results of that surveillance are used to fuel a sophisticated and opaque system for narrowly targeting advertisements and other wares to Facebook’s users.

    Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.

    Facebook doesn’t just record every click and “like” on the site. It also collects browsing histories. It also purchases “external” data like financial information about users (though European nations have some regulations that block some of this). Facebook recently announced its intent to merge “offline” data — things you do in the physical world, such as making purchases in a brick-and-mortar store — with its vast online databases.

    Facebook even creates “shadow profiles” of nonusers. That is, even if you are not on Facebook, the company may well have compiled a profile of you, inferred from data provided by your friends or from other data. This is an involuntary dossier from which you cannot opt out in the United States.

    What does informed consent mean in the current situation?

    This wasn’t informed consent. This was the exploitation of user data and user trust.

    Let’s assume, for the sake of argument, that you had explicitly consented to turn over your Facebook data to another company. Do you keep up with the latest academic research on computational inference? Did you know that algorithms now do a pretty good job of inferring a person’s personality traits, sexual orientation, political views, mental health status, substance abuse history and more just from his or her Facebook “likes” — and that there are new applications of this data being discovered every day?
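
    The research behind that claim (for instance, Kosinski, Stillwell, and Graepel’s 2013 PNAS study of Facebook likes) reduces to a simple recipe: a binary likes matrix is enough to train a classifier for a private trait. Below is a minimal Python sketch on synthetic data, not the authors’ actual pipeline, just the shape of the inference.

        # Minimal sketch of trait inference from "likes". The data is
        # synthetic; real studies used millions of actual Facebook likes.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_users, n_pages = 5_000, 200

        trait = rng.integers(0, 2, n_users)    # hidden trait to infer
        base = rng.random(n_pages) * 0.10      # baseline like rate per page
        tilt = rng.normal(0, 0.05, n_pages)    # trait-correlated shift
        probs = np.clip(base + np.outer(trait, tilt), 0, 1)
        likes = rng.random((n_users, n_pages)) < probs  # binary like matrix

        X_train, X_test, y_train, y_test = train_test_split(
            likes, trait, random_state=0
        )
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print(f"accuracy from likes alone: {model.score(X_test, y_test):.2f}")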

    Given this confusing and rapidly changing state of affairs about what the data may reveal and how it may be used, consent to ongoing and extensive data collection can be neither fully informed nor truly consensual — especially since it is practically irrevocable.

    A business model based on vast data surveillance and charging clients to opaquely target users based on this kind of extensive profiling will inevitably be misused. The real problem is that billions of dollars are being made at the expense of the health of our public sphere and our politics, and crucial decisions are being made unilaterally, and without recourse or accountability.

    #Surveillance #Facebook #Zeynep_Tufekci #Consentement_éclairé

  • YouTube, the Great Radicalizer - The New York Times
    https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

    By Zeynep Tufekci

    It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

    This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

    What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

    Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

    It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.
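
    Chaslot’s methodology can be sketched in a few lines: seed a walk at a video, repeatedly follow the top “up next” recommendations, and tally where the walks end up. The Python below runs on a hard-coded toy graph; get_recommendations is a hypothetical stand-in for the scraping his actual tool performed.

        # Sketch of a recommendation-audit crawl. The graph is a toy;
        # get_recommendations stands in for scraping YouTube's "up next".
        import random
        from collections import Counter

        TOY_GRAPH = {
            "clinton_speech": ["debate_clip", "rally_clip"],
            "debate_clip": ["rally_clip", "news_recap"],
            "news_recap": ["debate_clip"],
            "rally_clip": ["rally_clip_2", "conspiracy_vid"],
            "rally_clip_2": ["conspiracy_vid"],
            "conspiracy_vid": ["conspiracy_vid_2"],
            "conspiracy_vid_2": [],
        }

        def get_recommendations(video_id):
            # Hypothetical stand-in: return the "up next" list for a video.
            return TOY_GRAPH.get(video_id, [])

        def random_walk(seed, depth=5):
            current = seed
            for _ in range(depth):
                recs = get_recommendations(current)
                if not recs:
                    break
                current = random.choice(recs)
            return current

        random.seed(0)
        endpoints = Counter(random_walk("clinton_speech") for _ in range(1_000))
        # Compare endpoint tallies across differently seeded walks to see
        # which direction the recommender systematically drifts.
        print(endpoints.most_common())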

    Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

    YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

    What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

    #Zeynep_Tufekci #Google #YouTube #Radicalisation #Pouvoir_algorithmes #Politique_algorithmes