• What If the End of the World Were Good News?

    Just ten years later, this reasoning seems antediluvian. Two mental barriers identified by the study nonetheless remain obstacles to action: the first concerns our habits, the second the feeling of powerlessness. “Quasi-automatic habits are extremely resistant to change,” the study noted. “People also believe that their individual action is too limited to make any difference, and so they choose to do nothing.”

    Wallace-Wells picks up this observation in his book: “The intellectual posture of powerlessness seems to suit us particularly well.” While doubt and denial about climate change have receded, they have been replaced by feelings that are just as paralyzing: panic, anxiety, and resignation.

  • The Software That Shapes Workers’ Lives | The New Yorker

    How could I know which had been made ethically and which hadn’t?

    Answering this question can be surprisingly difficult. A few years ago, while teaching a class about global labor at the University of California, Los Angeles, I tried assigning my students the task of analyzing the “supply chain”—the vast network of factories, warehouses, and shipping conduits through which products flow—by tracing the components used in their electronic devices. Almost immediately, I hit a snag: it turns out that even companies that boast about “end-to-end visibility” and “supply-chain transparency” may not know exactly where their components come from. This ignorance is built into the way supply chains work. The housing of a television, say, might be built in a small factory employing only a few people; that factory interacts only with the suppliers and buyers immediately adjacent to it in the chain—a plastic supplier on one side, an assembly company on the other. This arrangement encourages modularity, since, if a company goes out of business, its immediate partners can replace it without consulting anyone. But it also makes it hard to identify individual links in the chain. The resilient, self-healing quality of supply chains derives, in part, from the fact that they are unsupervised.

    When people try to picture supply chains, they often focus on their physical infrastructure. In Allan Sekula’s book “Fish Story,” a volume of essays and photographs produced between 1989 and 1995, the writer and photographer trains his lens on ports, harbors, and the workers who pilot ships between them; he reveals dim shipboard workspaces and otherworldly industrial zones. In “The Forgotten Space,” a documentary that Sekula made with the film theorist Noël Burch, in 2010, we see massive, gliding vessels, enormous machines, and people rummaging through the detritus around ports and harbors. Sekula’s work suggests the degree to which our fantasy of friction-free procurement hides the real, often gruelling, work of global shipping and trade.

    But supply chains aren’t purely physical. They’re also made of information. Modern supply-chain management, or S.C.M., is done through software. The people who design and coördinate supply chains don’t see warehouses or workers. They stare at screens filled with icons and tables. Their view of the supply chain is abstract. It may be the one that matters most.

    Most of the time, the work of supply-chain management is divided up, with handoffs where one specialist passes a package of data to another. No individual is liable to possess a detailed picture of the whole supply chain. Instead, each S.C.M. specialist knows only what her neighbors need.

    In such a system, a sense of inevitability takes hold. Data dictates a set of conditions which must be met, but there is no explanation of how that data was derived; meanwhile, the software takes an active role, tweaking the plan to meet the conditions as efficiently as possible. SAP’s built-in optimizers work out how to meet production needs with the least “latency” and at the lowest possible costs. (The software even suggests how tightly a container should be packed, to save on shipping charges.) The result is that particular components must become available at particular times. The consequences of this relentless optimization are well documented. The corporations that commission products pass their computationally determined demands on to their subcontractors, who then put extraordinary pressure on their employees. Thus, China Labor Watch found that workers in Heyuan City, China, tasked with producing Disney’s Princess Sing & Sparkle Ariel Bath Doll—retail price today, $26.40—work twenty-six days a month, assembling between eighteen hundred and twenty-five hundred dolls per day, and earning one cent for each doll they complete.
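    The piece-rate figures reported by China Labor Watch can be checked with a few lines of arithmetic. This is only an illustration of the reported numbers — the constants and names below are ours, not code from SAP or any real supply-chain system:

    ```python
    # Toy arithmetic using the figures reported by China Labor Watch:
    # 1,800-2,500 dolls per day, one cent per doll, twenty-six working
    # days a month. All names here are illustrative.

    PAY_PER_DOLL = 0.01      # dollars earned per completed doll
    DAYS_PER_MONTH = 26      # working days reported per month
    RETAIL_PRICE = 26.40     # retail price of one doll, in dollars

    for dolls_per_day in (1800, 2500):
        daily_pay = round(dolls_per_day * PAY_PER_DOLL, 2)
        monthly_pay = round(daily_pay * DAYS_PER_MONTH, 2)
        print(f"{dolls_per_day} dolls/day -> ${daily_pay:.2f}/day, ${monthly_pay:.2f}/month")
    ```

    At the low end, a worker’s pay for a full day of assembly comes to eighteen dollars — less than the retail price of a single doll.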

    Still, from a worker’s point of view, S.C.M. software can generate its own bullwhip effect. At the beginning of the planning process, product requirements are fairly high-level. But by the time these requirements reach workers, they have become more exacting, more punishing. Small reductions in “latency,” for instance, can magnify in consequence, reducing a worker’s time for eating her lunch, taking a breath, donning safety equipment, or seeing a loved one.
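    How small “latency” reductions can compound through the planning handoffs can be sketched in a few lines. The stage count and percentages below are invented purely for illustration; nothing here comes from the article or from SAP:

    ```python
    # Hypothetical sketch: each planning handoff trims the slack allotted
    # downstream by a small fraction; by the time the plan reaches the
    # factory floor, the cuts have compounded. All numbers are invented.

    def remaining_slack(initial_minutes: float, cut_per_stage: float, stages: int) -> float:
        """Slack left after `stages` handoffs, each trimming `cut_per_stage` (0.05 = 5%)."""
        slack = initial_minutes
        for _ in range(stages):
            slack *= (1 - cut_per_stage)
        return slack

    # A 5% trim at each of six handoffs removes over a quarter of the slack.
    slack = remaining_slack(60.0, 0.05, 6)
    print(f"{slack:.1f} minutes of slack remain out of 60")
    ```

    Each trim looks negligible in isolation; it is the compounding across handoffs that turns a high-level plan into a punishing schedule.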

    Could S.C.M. software include a “workers’-rights” component—a counterpart to PP/DS, incorporating data on working conditions? Technically, it’s possible. SAP could begin asking for input about worker welfare. But a component like that would be at cross-purposes with almost every other function of the system. On some level, it might even undermine the purpose of having a system in the first place. Supply chains create efficiency in part through the distribution of responsibility. If a supervisor at a toy factory objects to the production plan she’s received, her boss can wield, in his defense, a PP/DS plan sent to him by someone else, who worked with data produced by yet another person. It will turn out that no one in particular is responsible for the pressures placed on the factory. They flow from the system—a system designed to be flexible in some ways and rigid in others.

    #Algorithmes #SAP #Droit_travail #Industrie_influence

  • After Years of Abusive E-mails, the Creator of Linux Steps Aside | The New Yorker

    Valerie Aurora, a former Linux-kernel contributor, told me that a decade of working in the Linux community convinced her that she could not rise in its hierarchy as a woman. Aurora said that the concept of Torvalds and other powerful tech figures being “equal-opportunity assholes” was false and sexist.

    Given the opposite example set by Python, it makes you want, as a woman, to take up that language.

    #sexisme #code #développement #domination #Torvalds #linux

  • After Years of Abusive E-mails, the Creator of Linux Steps Aside | The New Yorker

    Torvalds’s decision to step aside came after The New Yorker asked him a series of questions about his conduct for a story on complaints about his abusive behavior discouraging women from working as Linux-kernel programmers. In a response to The New Yorker, Torvalds said, “I am very proud of the Linux code that I invented and the impact it has had on the world. I am not, however, always proud of my inability to communicate well with others—this is a lifelong struggle for me. To anyone whose feelings I have hurt, I am deeply sorry.”

    Although it distributes its product for free, the Linux project has grown to resemble a blue-chip tech company. Nominally a volunteer enterprise, like Wikipedia, Linux, in fact, is primarily sustained by funds and programmers from the world’s large technology companies. Intel, Google, IBM, Samsung, and other companies assign programmers to help improve the code. Of the eighty thousand fixes and improvements to Linux made in the past year, more than ninety per cent were produced by paid programmers, the foundation reported in 2017; Intel employees alone were responsible for thirteen per cent of them. These same companies, and hundreds of others, covered the foundation’s roughly fifty-million-dollar annual budget.

    Linux’s élite developers, who are overwhelmingly male, tend to share their leader’s aggressive self-confidence. There are very few women among the most prolific contributors, though the foundation and researchers estimate that roughly ten per cent of all Linux coders are women. “Everyone in tech knows about it, but Linus gets a pass,” Megan Squire, a computer-science professor at Elon University, told me, referring to Torvalds’s abusive behavior. “He’s built up this cult of personality, this cult of importance.”

    Valerie Aurora, a former Linux-kernel contributor, told me that a decade of working in the Linux community convinced her that she could not rise in its hierarchy as a woman. Aurora said that the concept of Torvalds and other powerful tech figures being “equal-opportunity assholes” was false and sexist: when she and Sage Sharp adopted Torvalds’s aggressive communication style, they experienced retaliation. “Basically, Linus has created a model of leadership—which is being an asshole,” Aurora told me. “Sage and I can tell you that being an asshole was not available to us. If we were an asshole, we got smacked for it, got punished, got held back. I tried it.”

    Torvalds, by contrast, long resisted the idea that the Linux programming team needed to become more diverse, just as he resisted calls to tone down his language. In 2015, Sharp advocated for a first-ever code of conduct for Linux developers. At a minimum, they hoped for a code that would ban doxxing—the releasing of personal information online to foment harassment—and threats of violence in the community. Instead, Torvalds accepted a programming fix provocatively titled “Code of Conflict,” which created a mechanism for filing complaints more generally. In the three years since then, no developers have been disciplined for abusive comments. Sharp, who was employed by Intel at the time, said they carefully avoided Linux kernel work thereafter.

    #Linux #Linus_Torvalds #Genre #Développeurs #Logiciels_libres #Machisme

  • How to Fix Facebook | The New Yorker

    Adrian, I think you’re exactly right that this is both a technical problem and a human problem, and that Zuckerberg is pushing the narrative of bad actors who exploited a loophole. But if we can call it a loophole at all, then it’s a policy loophole: Facebook was operating exactly as it was intended to. It was and is an ad network. The scope of the metadata that developers could harvest (and retain) probably isn’t surprising to anyone who has worked in ad tech, or at any tech company, really. Facebook trusted developers to do the right thing, and I think this reliance on good faith—a phrase that gets a lot of exercise in the tech industry—tracks with a sort of tech-first, developer-is-king mind-set.

    In some ways, this trust in developers is a product of carelessness, but it’s also a product of a lack of imagination: it rests on the assumption that what begins as a technical endeavor remains a technical endeavor. It also speaks to a greater tension in the industry, I think, between technical interests (what’s exciting, new, useful for developers) and the social impact of these products. I don’t know how software is built at Facebook, but I imagine that the engineering team working on the Graph A.P.I., a developer tool that enables interaction with the platform’s user relationships, probably wasn’t considering the ways in which metadata could be exploited. It’s not necessarily their job to hypothesize about developers who might create, say, fifteen apps, then correlate the data sets in order to build out comprehensive user profiles. That said, maybe it should be the job of the product-management team. I don’t mean to lean too heavily on conjecture; Facebook is a black box, and it’s nearly impossible to know the company’s internal politics.

    In any case, the underlying issues aren’t specific to Facebook. The question of good faith is an industry-wide problem. Data retention is an industry-wide problem. Transparency is touted as a virtue in Silicon Valley, but when it comes to the end user, transparency is still treated as more of a privilege than a right.

    Anna Wiener: Nathan, I think your point about Facebook’s commercial orientation is really important. Facebook’s customers are not its users. It’s a developer-oriented attention magnet that makes its money from advertisers based on the strength of its users’ data. For Facebook to truly prioritize user privacy could mean the collapse of its revenue engine. So when Zuckerberg says, “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” it’s very strange, because it assumes that Facebook’s primary orientation is toward users. Zuckerberg runs a business, not a community; my understanding is that Facebook sees itself as a software company, not a social institution, and behaves accordingly.

    #Facebook #Médias_sociaux