• What OpenAI shares with Scientology - by Henry Farrell
    https://www.programmablemutter.com/p/look-at-scientology-to-understand

    When Sam Altman was ousted as CEO of OpenAI, some hinted that lurid depravities lay behind his downfall. Surely, OpenAI’s board wouldn’t have toppled him if there weren’t some sordid story about to hit the headlines? But the reporting all seems to be saying that it was God, not Sex, that lay behind Altman’s downfall. And Money, that third great driver of human behavior, seems to have driven his attempted return and his new job at Microsoft, which is OpenAI’s biggest investor by far.

    The drama at OpenAI and the firing of Sam Altman is the first skirmish in a war.
    https://slate.com/technology/2023/11/openai-sam-altman-ai-microsoft-eacc-effective-altruism.html

    The confounding saga of Sam Altman’s sudden, shocking expulsion from OpenAI on Friday, followed by last-ditch attempts from investors and loyalists to reinstate him over the weekend, appears to have ended right where it started: with Altman and former OpenAI co-founder/president/board member Greg Brockman out for good. But there’s a twist: Microsoft, which has been OpenAI’s cash-and-infrastructure backer for years, announced early Monday morning that it was hiring Altman and Brockman “to lead a new advanced AI research team.” In a follow-up tweet, Microsoft CEO Satya Nadella declared that Altman would become chief executive of this team, which would take the shape of an “independent” entity within Microsoft, operating something like company subsidiaries GitHub and LinkedIn. Notably, per Brockman, this new entity will be led by himself, Altman, and the first three employees who’d quit OpenAI Friday night in protest of how those two had been treated.

  • The Capitol siege and facial recognition technology.
    https://slate.com/technology/2021/01/facial-recognition-technology-capitol-siege.html

    In a recent New Yorker article about the Capitol siege, Ronan Farrow described how investigators used a bevy of online data and facial recognition technology to confirm the identity of Larry Rendall Brock Jr., an Air Force Academy graduate and combat veteran from Texas. Brock was photographed inside the Capitol carrying zip ties, presumably to be used to restrain someone. (He claimed to Farrow that he merely picked them up off the floor and forgot about them. Brock was arrested Sunday and (...)

    #Clearview #algorithme #CCTV #biométrie #technologisme #facial #reconnaissance #vidéo-surveillance #extrême-droite #surveillance #voix (...)

    ##AINow

  • Facebook is fighting biometric facial recognition privacy laws.
    https://slate.com/technology/2017/08/facebook-is-fighting-biometric-facial-recognition-privacy-laws.html

    There’s a court case in Illinois that challenges Facebook’s collection of biometric data without users’ permission, and the social media giant is fighting tooth and nail to defend itself. Carlos Licata, one of the plaintiffs on the case, sued Facebook in 2015 under a unique Illinois law, the Biometric Information Privacy Act, which says that no private company can collect or store a person’s biometric information without prior notification and consent. If companies do collect data without (...)

    #CBP #Facebook #algorithme #biométrie #consentement #émotions #facial #législation #reconnaissance #lobbying #publicité #surveillance (...)

    ##publicité ##_

  • Big Data has allowed ICE to dramatically expand its deportation efforts.
    https://slate.com/technology/2020/09/palantir-ice-deportation-immigrant-surveillance-big-data.html

    A New Mexico man gets a call from federal child welfare officials. His teenage brother has arrived alone at the border after traveling 2,000 miles to escape a violent uncle in Guatemala. The officials ask him to take custody of the boy. He hesitates; he is himself undocumented. The officials say not to worry. He agrees and gives the officials his information. Seven months later, ICE agents arrest him at his house and start deportation proceedings. A family in suburban Maryland gets a (...)

    #Palantir #CBP #ICE #algorithme #biométrie #migration #facial #reconnaissance #BigData #conducteur·trice·s #empreintes (...)

    ##surveillance

  • Congress must decide whether to renew a key part of the USA Freedom Act.
    https://slate.com/technology/2020/01/usa-freedom-act-renewal-section-215-cdr.html

    It’s highly intrusive and ineffective—but some insist Congress should reauthorize it anyway. Remember the Snowden disclosures? It may seem like an eternity ago, but it was in 2013 that Edward Snowden revealed to the public the government’s extensive warrantless domestic surveillance program. After he disclosed that the National Security Agency was scooping up millions of phone records showing Americans’ calling patterns, Congress responded appropriately by ending that bulk collection program (...)

    #NSA #législation #Patriot_Act #surveillance

  • It Looks Like the Trump Campaign’s App Will Track Users’ Locations. Is That Normal?
    https://slate.com/technology/2019/12/trump-2020-app-phunware-ads-data-tracking.html

    In 2016, the Donald Trump campaign released an app called America First, which had about 120,000 registered users. Created by uCampaign, the app functioned both as a social network for Trump supporters and a tool for collecting data stored in a phone’s address book—such as the names, emails, and home addresses of both users and their saved contacts. For 2020, the Trump campaign again plans to offer an app to the president’s supporters, and it will likely collect some of their personal data. (...)

    #AmericanMadeMediaConsultants #CambridgeAnalytica #algorithme #smartphone #géolocalisation #élections #data #écoutes #profiling #électeurs #publicité (...)

    ##publicité ##Phunware

  • The techlash has come to Stanford.
    https://slate.com/technology/2019/08/stanford-tech-students-backlash-google-facebook-palantir.html

    Palantir is about a 15-minute walk from Stanford University. That stone’s-throw convenience helped one morning in June when a group of Stanford students perched on the third story of a parking garage across the street from the data-analytics company’s entrance and unfurled a banner to greet employees as they walked into work: “OUR SOFTWARE IS SO POWERFUL IT SEPARATES FAMILIES.”

    The students were protesting Palantir software that U.S. Immigration and Customs Enforcement uses to log information on asylum-seekers, helping the agency make arrests of undocumented family members who come to pick them up. The activists are members of a campus group called SLAP—Students for the Liberation of All People—that was founded by Stanford freshmen the winter after Donald Trump was elected president. At first, the group focused on concerns shared by leftist activists around the country: On the day of Trump’s inauguration, for example, members blocked the doors of a Wells Fargo near campus to protest the bank’s funding of the Dakota Access Pipeline and its history of racist lending practices. These days, though, SLAP has turned its attention to the industry in its backyard: Big Tech.

    This might all sound like standard campus activism. But many of SLAP’s peers don’t see the group—and another, softer-edged student organization called CS+Social Good—as marginal or a nuisance. Even computer science students whom I interviewed told me they were grateful SLAP is making noise about Silicon Valley, and that their concerns reflect a growing campus skepticism of the technology industry, even among students training to join it.

    Many of the computer science students at Stanford I talked to oscillated as they described how they feel about companies like Facebook, Microsoft, Amazon, and Google. Some told me they would never work for one of these companies. Others would but hope to push for change from within. Some students don’t care at all, but even the ones who would never think twice about taking a job at Facebook aren’t blind to how the company is perceived. “It probably varies person to person, but I’m at least hopeful that more of the Stanford CS community is thoughtful and critical of the morality of choosing a place to work these days, rather than just chasing prestige,” Neel Rao, a computer science undergrad at Stanford, told me in an online chat. “And that a lot of this is due to increasing coverage of major tech scandals, and its effect on mainstream public sentiment and distrust.”

    But unlike Computer Professionals for Social Responsibility—and in contrast with the current direct-action approach of SLAP—CS+Social Good is primarily focused on changing computer science higher education from the inside. The organization has worked with the university to create new electives in Stanford’s CS department, like “A.I. for Social Good” and studio classes that allow students to partner with nonprofits on tech projects and get credits. And CS+Social Good has expanded to other campuses too—there are now more than a dozen chapters at campuses across the country. At Stanford, CS+Social Good counts more than 70 core members, though well over 1,000 students have attended its events or are enrolled in the classes it’s helped design.

    #Techlash #Stanford #Ethique #Informatique

  • The “Drunk Pelosi” video shows that cheapfakes can be as damaging as deepfakes.
    https://slate.com/technology/2019/06/drunk-pelosi-deepfakes-cheapfakes-artificial-intelligence-disinformation.html

    The A.I.-generated “deepfake” video implicitly but unmistakably calls for Facebook to make a public statement on its content moderation policies. The platform has long been criticized for permitting the spread of disinformation and harassment, but the criticism became particularly acute recently, when the company said that it would not remove the “Drunk Pelosi” video.

    On Thursday, the House Permanent Select Committee on Intelligence will hold an open hearing on A.I. and the potential threat of deepfake technology to Americans. Many technology researchers believe that deepfakes—realistic-looking content developed using machine learning algorithms—will herald a new era of information warfare. But as the “Drunk Pelosi” video shows, slight edits of original videos may be even more difficult to detect and debunk, creating a cascade of benefits for those willing to use these digital dirty tricks.

    The video, posted to a self-described news Facebook page with a fan base of about 35,000, depicted Nancy Pelosi slurring her words and sounding intoxicated. However, when compared with another video from the same event, it was clear even to nonexperts that it had been slowed down to produce the “drunken” effect. Call it a “cheapfake”—it was modified only very slightly. While the altered video garnered some significant views on Facebook, it was only after it was amplified by President Donald Trump and other prominent Republicans on Twitter that it became a newsworthy issue. The heightened drama surrounding this video raises interesting questions not only about platform accountability but also about how to spot disinformation in the wild.

    “Cheapfakes” rely on free software that allows manipulation through easy, conventional editing techniques like speeding, slowing, and cutting, as well as nontechnical manipulations like restaging or recontextualizing existing footage, and they are already causing problems. Cheapfakes call into question the methods of evidence that scientists, courts, and newsrooms traditionally rely on to demand accountability.
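
    To make concrete how low the technical bar is, here is a minimal sketch, assuming only Python and a stock ffmpeg install; the file names and the 75 percent speed factor are illustrative, not the parameters used on the actual Pelosi clip. Slowing the video timestamps while keeping the audio tempo in step is the whole trick behind the “drunken” effect.

    ```python
    import subprocess

    def slow_down(src: str, dst: str, speed: float = 0.75) -> None:
        """Create a slowed-down copy of a clip with stock ffmpeg filters.

        setpts stretches the video timestamps; atempo slows the audio to
        match while keeping its pitch close to the original, so speech
        sounds slurred rather than simply deeper. Values are illustrative.
        """
        video_filter = f"setpts={1 / speed:.3f}*PTS"  # 0.75x speed -> 1.333*PTS
        audio_filter = f"atempo={speed:.3f}"          # atempo accepts 0.5-2.0
        subprocess.run(
            ["ffmpeg", "-i", src, "-filter:v", video_filter,
             "-filter:a", audio_filter, dst],
            check=True,
        )

    # Hypothetical usage:
    # slow_down("original_speech.mp4", "slowed_speech.mp4", speed=0.75)
    ```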

    Many will never know the video was a fake, but the advantages it gave to pundits will echo into the future. It’s a recent example of what legal theorists Bobby Chesney and Danielle Citron call the liar’s dividend. Those wishing to deny the truth can create disinformation to support their lie, while those caught behaving badly can write off the evidence of bad behavior as disinformation. In a new survey from Pew Research Center, 63 percent of respondents said that they believe altered video and images are a significant source of confusion when it comes to interpreting news quality. That loss of trust works in favor of those willing to lie, defame, and harass to gain attention.

    As Daniel Kreiss and others have pointed out, people don’t just share content because they believe it. They do it for a host of reasons, not the least of which is simply because a message speaks to what users see as an implicit truth of the world even as they know it is not factually true. Researchers have found that creating and sharing hateful, false, or faked content is often rewarded on platforms like Facebook.

    The looming threat of the deepfake is worth attention—from politicians, like at the upcoming hearing; from journalists; from researchers; and especially from the public that will ultimately be the audience for these things. But make no mistake: Disinformation doesn’t have to be high tech to cause serious damage.

    #Fake_news #Deep_fake #Cheap_fake #Nancy_Pelosi #Médias_sociaux

  • Uber is creating a new gig economy that turns workers into customers.
    https://slate.com/technology/2019/03/uber-gig-workers-customers.html

    Uber brings the technology culture of Silicon Valley to the world of work. Facebook sparked a public outcry after it quietly experimented with the psychological states of select users by displaying happier or sadder posts to them in their news feed to study the effects of emotional contagion. People were outraged both because they didn’t want to be the unwitting subjects of mood experimentation, and also because the experiment contradicted the idea that a neutral, objective, and benevolent algorithm curates their news feed. Similarly, Uber experimented with driver pay by implementing upfront pricing without alerting drivers or adjusting their contracts, until months later, after drivers crowdsourced evidence of a new pay policy.

    When Uber takes advantage of the unwitting users of its technology, it could be within its rights to do so, though its particular machinations actually contradict the company’s own description of its business model: In legal forums and in its contracts with drivers, the company says it provides a platform that connects all its users, implying that its technology is neutral, like a credit card processor. In one court hearing, Uber’s lawyers used rough metaphors to explain this logic in oral arguments, saying, “People demand ice cream. We have vendors, vendors who produce ice cream that are able, through our software, demanded—on demand to people that want ice cream. We facilitate that transaction. We’re not in the ice cream business, you know.”

    But Uber is in the figurative ice cream business. Uber monitors drivers through the data they generate on the job and controls their workplace behavior through various methods, from in-app behavioral nudges that influence when and where drivers work to the threat of account deactivation if drivers don’t follow some of Uber’s behavioral “suggestions.” Yet Uber also explicitly adopts a model of customer service communications in managing its workers as if they were mere consumers. In fact, beyond intense supervision, Uber controls drivers by creating an appeals process that limits their ability to find resolutions to their concerns.

    The very vocabulary that Uber deploys to describe its drivers and its own practices reinforces this view of labor: It treats its workers as “end users” and “customers” of its software. The terms are used in Uber’s lawsuits, and a senior Uber employee casually referred to the company’s workforce as “end users” in conversation with me. The rhetorical impact of that language is clever. By fudging the terms of employment within its control, Uber provides us with a template for questioning what we know about employment relationships that can create legal distance between a worker and an employer. And it ushers in a new way of doing business all while the same old problems, like workplace harassment, persist under the veneer of technological neutrality.

    The central question of how to categorize a driver—and how to consider work in the sharing economy more broadly—animates the conflict between labor advocates and Uber. And Uber’s defense of its labor practices articulates dynamic changes in how employment and consumption are negotiated in digital spaces. The question in this new economy is whether algorithmic management really creates a qualitative distinction between work and consumption. By encouraging this distinction and describing its technology as a way to merely connect two groups of users, Uber can have its cake and eat it too, avoiding responsibility for prospective labor law violations while its ostensibly neutral algorithms give the company vast leverage over how drivers do their work.

    #Uber #Industrie_influence #Travail

  • Why Is It So Hard to Quit Amazon? Because Shopping Is Labor.
    https://slate.com/technology/2018/11/quitting-amazon-prime-boycott-shopping-labor.html

    Prime has helped overworked and underpaid Americans stretch their money and time. No wonder it’s so hard to quit. There are more than 100 million paid subscriptions to Amazon Prime, the inordinately convenient service that provides fast shipping, good deals, and prestige television to consumers. The majority of U.S. households (51 percent) have an account. Increasingly, people are wising up to both the company’s serious threat as a monopoly and its willingness to seek domination at the (...)

    #Amazon #travail #solutionnisme #domination

  • Cambridge Analytica demonstrates that Facebook needs to give researchers more access.
    https://slate.com/technology/2018/03/cambridge-analytica-demonstrates-that-facebook-needs-to-give-researchers-more

    In a 2013 paper, psychologist Michal Kosinski and collaborators from the University of Cambridge in the United Kingdom warned that “the predictability of individual attributes from digital records of behavior may have considerable negative implications,” posing a threat to “well-being, freedom, or even life.” This warning followed their striking findings about how accurately a person’s attributes (from political leanings to intelligence to sexual orientation) could be inferred from nothing but their Facebook likes. Kosinski and his colleagues gained access to this information through the voluntary participation of Facebook users, whom they recruited by offering the results of a personality quiz, a method that can drive viral engagement. Of course, one person’s warning may be another’s inspiration.
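
    As a rough illustration of how such inference works, here is a minimal sketch in Python on synthetic data, assuming a binary user-by-like matrix. It follows the general recipe the 2013 paper describes (singular-value decomposition of the like matrix followed by a regression model), but the data, numbers, and variable names are invented for illustration; this is not the authors’ code.

    ```python
    # Minimal illustration (synthetic data) of predicting a binary attribute
    # from a sparse user-by-like matrix: truncated SVD for dimensionality
    # reduction, then logistic regression. This mirrors the general recipe
    # described in the 2013 paper; it is not the authors' code or data.
    import numpy as np
    from scipy.sparse import random as sparse_random
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_users, n_likes = 5000, 2000

    # Synthetic "likes": a sparse 0/1 matrix (1 = user liked that page).
    likes = sparse_random(n_users, n_likes, density=0.02, random_state=0).tocsr()
    likes.data[:] = 1.0

    # Synthetic binary attribute loosely correlated with a subset of pages.
    signal_pages = rng.choice(n_likes, size=50, replace=False)
    score = np.asarray(likes[:, signal_pages].sum(axis=1)).ravel()
    labels = (score + rng.normal(0, 1, n_users) > np.median(score)).astype(int)

    # Reduce the like matrix to latent dimensions, then fit a classifier.
    components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
    X_train, X_test, y_train, y_test = train_test_split(
        components, labels, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
    ```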

    Kosinski’s original research really was an important scientific finding. The paper has been cited more than 1,000 times and the dataset has spawned many other studies. But the potential uses for it go far beyond academic research. In the past few days, the Guardian and the New York Times have published a number of new stories about Cambridge Analytica, the data mining and analytics firm best known for aiding President Trump’s campaign and the pro-Brexit campaign. This trove of reporting shows how Cambridge Analytica allegedly relied on the psychologist Aleksandr Kogan (who also goes by Aleksandr Spectre), a colleague of the original researchers at Cambridge, to gain access to profiles of around 50 million Facebook users.

    According to the Guardian’s and New York Times’ reporting, the data used to build these models came from a rough duplicate of the personality quiz method used legitimately for scientific research. Kogan, a lecturer in another department, reportedly approached Kosinski and his Cambridge colleagues in the Psychometric Centre to discuss commercializing the research. To his credit, Kosinski declined. However, Kogan built an app named thisismydigitallife for his own startup, Global Science Research, which collected the same sorts of data. GSR paid Mechanical Turk workers (contrary to Mechanical Turk’s terms) to take a psychological quiz and provide access to their Facebook profiles. In 2014, under a contract with Cambridge Analytica’s parent company, SCL, that data was harvested and used to build a model of 50 million U.S. Facebook users that allegedly included 5,000 data points on each user.

    So if the Facebook API allowed Kogan access to this data, what did he do wrong? This is where things get murky, but bear with us. It appears that Kogan deceitfully used his dual roles as a researcher and an entrepreneur to move data between an academic context and a commercial context, although exactly how is unclear. The Guardian claims that Kogan “had a licence from Facebook to collect profile data, but it was for research purposes only” and “[Kogan’s] permission from Facebook to harvest profiles in large quantities was specifically restricted to academic use.” Transferring the data this way would already violate Facebook’s API policies, which barred use of the data outside of Facebook for commercial purposes, but we are unfamiliar with Facebook offering a “license” or special “permission” for researchers to collect greater amounts of data via the API.

    Regardless, it does appear that the amount of data thisismydigitallife was vacuuming up triggered a security review at Facebook and an automatic shutdown of its API access. Relying on the narrative of whistleblower Christopher Wylie, the Guardian claims that Kogan “spoke to an engineer” and resumed access:

    “Facebook could see it was happening,” says Wylie. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use. So they were like, ‘Fine’.”

    Kogan claims that he had a close working relationship with Facebook and that it was familiar with his research agendas and tools.

    A great deal of research confirms that most people don’t pay attention to permissions and privacy policies for the apps they download and the services they use—and the notices are often too vague or convoluted to clearly understand anyway. How many Facebook users give third parties access to their profile so that they can get a visualization of the words they use most, or to find out which Star Wars character they are? It isn’t surprising that Kosinski’s original recruitment method—a personality quiz that provided you with a psychological profile of yourself based on a common five-factor model—resulted in more than 50,000 volunteers providing access to their Facebook data. Indeed, Kosinski later co-authored a paper detailing how to use viral marketing techniques to recruit study participants, and he has written about the ethical dynamics of utilizing friend data.

    #Facebook #Cambridge_analytica #Recherche

    • To this (the study does not mention it) we can add another burden for the victim: administrative isolation. As you are no doubt aware, in order to streamline a number of procedures, administrations are pushing toward paperless, online-only processes. Yet if a woman who is a victim of domestic violence has to cut off her digital access, she also risks being deprived of her rights (social benefits, administrative procedures, etc.). Likewise, changing one’s mobile phone number also means making changes to certain online accounts, notably bank accounts. We therefore risk making people who are already vulnerable even more precarious.

    • On the same subject: https://slate.com/technology/2018/03/apps-cant-stop-exes-who-use-technology-for-stalking.html, which also explains that the usual threat model does not apply in these situations.

      When you learn that your privacy has been compromised, the common advice is to prevent additional access — delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages — but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

      via https://www.schneier.com/blog/archives/2018/03/intimate_partne.html