Culture : TV, Movies, Music, Art, and Theatre News and Reviews

/culture

  • Is Alexandria Ocasio-Cortez an Insider Now? | The New Yorker
    https://www.newyorker.com/culture/the-new-yorker-interview/is-alexandria-ocasio-cortez-an-insider-now

    A very fine interview with Alexandria Ocasio-Cortez. Starting from her own experience, she proposes a way of acting so that things change. A way of acting more than a program.

    The social-media folks at The New Yorker invited people to propose questions for you via Instagram. Hope is the theme that is the center of almost all of these. If I can distill them, the most basic question is, What would you say to people, particularly young people, who have lost hope?

    I’ve been there. And what I can say is that, when you’re feeling like you’ve lost hope, it’s a very passive experience, which is part of what makes it so depressing.

    And that’s what I had to go through. There was all this hope when Obama was elected, in 2008. And, at the end of the day, a lot of people that had hope in our whole country had those hopes dashed.

    I graduated. My dad died. My family had medical debt, because we live in the jankiest medical system in the developed world. My childhood home was on the precipice of being taken away by big banks. I’d be home, and there’d be bankers in cars parked in front of my house, taking pictures for the inevitable day that they were going to kick us out.

    I was supposed to be the great first generation to go to college, and I graduated into a recession where bartending, legitimately, and waitressing, legitimately, paid more than any college-level entry job that was available to me. I had a complete lack of hope. I saw a Democratic Party that was too distracted by institutionalized power to stand up for working people. And I decided this is bullshit. No one, absolutely no one, cares about people like me, and this is hopeless. And I lost hope.

    How did that manifest itself?

    It manifested in depression. Feeling like you have no agency, and that you are completely subject to the decisions of people who do not care about you, is a profoundly depressing experience. It’s a very invisibilizing experience. And I lived in that for years. This is where sometimes what I do is speak to the psychology of our politics rather than to the polling of our politics. What’s really important for people to understand is that to change that tide and to actually have this well of hope you have to operate on your direct level of human experience.

    When people start engaging individually enough, it starts to amount to something bigger. We have a culture of immediate gratification where if you do something and it doesn’t pay off right away we think it’s pointless.

    But, if more people start to truly cherish and value the engagement and the work in their own back yard, it will precipitate much larger change. And the thing about people’s movements is that the opposite is very top-down. When you have folks with a profound amount of money, power, influence, and they really want to make something happen, they start with media. You look at these right-wing organizations, they create YouTube channels. They create their podcast stars. They have Fox News as their own personal ideological television outlet.

    Legitimate change in favor of public opinion is the opposite. It takes a lot of mass-public-building engagement, unrecognized work until it gets to the point that it is so big that to ignore it threatens the legitimacy of mass-media outlets, institutions of power, etc. It has to get so big that it is unignorable, in order for these positions up top to respond. And so people get very discouraged here.

    There is no movement, there is no effort, there is no unionizing, there is no fight for the vote, there is no resistance to draconian abortion laws, if people think that the future is baked in and nothing is possible and that we’re doomed. Even on climate—or especially on climate. And so the day-to-day of my day job is frustrating. So is everyone else’s. I ate shit when I was a waitress and a bartender, and I eat shit as a member of Congress. It’s called a job, you know?

    So, yes, I deal with the wheeling and dealing and whatever it is, that insider stuff, and I advance amendments that some people would criticize as too little, etc. I also advance big things that people say are unrealistic and naïve. Work is like that. It is always the great fear when it comes to work or pursuing anything. You want to write something, and, in your head, it’s this big, beautiful Nobel Prize-winning concept. And then you are humbled by the words that you actually put on paper.

    And that is the work of movement. That is the work of organizing. That is the work of elections. That is the work of legislation. That is the work of theory, of concepts, you know? And that is what it means to be in the arena.

    #Alexandria_Ocasio_Cortez

  • 62 of the Best Documentaries of All Time | The New Yorker
    https://www.newyorker.com/culture/the-front-row/sixty-two-films-that-shaped-the-art-of-documentary-filmmaking

    Nonfiction filmmaking has been undergoing an aesthetic revolution over the past decade or so, one that parallels the major change in fiction filmmaking, namely, a shift toward personalization. The main expression and key movement in that change is mumblecore, which has exerted a wide-ranging influence through its luminaries, its aesthetic, and its ideas. Mumblecore’s documentary counterpart is creative nonfiction, an idea that’s rooted in the filmmaker’s presence, be it physical or virtual, and in the conspicuous display of process.

    The artistic preoccupations of the new generation of documentary filmmakers don’t break with those of earlier generations; rather, they have their roots in decades-old films, in which the same ideas and practices sometimes turn up in forms—embodying the filmmakers’ relationship to their subjects—that seem daringly original even now. The most artistically advanced documentaries are those in which the participants are engaging conspicuously with the filmmakers; in their most radical forms, they show the influences, inspirations, or perturbations that the people onscreen experience from the filmmakers’ presence. Which is another way of saying that, although documentaries follow real people, their crucial material and subject is nonetheless performance.

  • Facebook Wants Us to Live in the Metaverse | The New Yorker
    https://www.newyorker.com/culture/infinite-scroll/facebook-wants-us-to-live-in-the-metaverse


    In a Facebook earnings call last week, Mark Zuckerberg outlined the future of his company. The vision he put forth wasn’t based on advertising, which provides the bulk of Facebook’s current profits, or on an increase in the over-all size of the social network, which already has nearly three billion monthly active users. Instead, Zuckerberg said that his goal is for Facebook to help build the “metaverse,” a Silicon Valley buzzword that has become an obsession for anyone trying to predict, and thus profit from, the next decade of technology. “I expect people will transition from seeing us primarily as a social-media company to seeing us as a metaverse company,” Zuckerberg said. It was a remarkable pivot in messaging for the social-media giant, especially given the fact that the exact meaning of the metaverse, and what it portends for digital life, is far from clear. In the earnings call, Zuckerberg offered his own definition. The metaverse is “a virtual environment where you can be present with people in digital spaces,” he said. It’s “an embodied Internet that you’re inside of rather than just looking at. We believe that this is going to be the successor to the mobile Internet.”

    Like the term “cyberspace,” a coinage of the fiction writer William Gibson, the term “metaverse” has literary origins. In Neal Stephenson’s novel “Snow Crash,” from 1992, the protagonist, Hiro, a sometime programmer and pizza-delivery driver in a dystopian Los Angeles, immerses himself in the metaverse, “a computer-generated universe that his computer is drawing onto his goggles and pumping into his earphones.” It’s an established part of the book’s fictional world, a familiar aspect of the characters’ lives, which move fluidly between physical and virtual realms. On a black ground, below a black sky, like eternal night in Las Vegas, Stephenson’s metaverse is made up of “the Street,” a sprawling avenue where the buildings and signs represent “different pieces of software that have been engineered by major corporations.” The corporations all pay an entity called the Global Multimedia Protocol Group for their slice of digital real estate. Users also pay for access; those who can only afford cheaper public terminals appear in the metaverse in grainy black-and-white.

    Stephenson’s fictional metaverse may not be that far off from what today’s tech companies are now developing. Imagine, like Hiro, donning goggles (perhaps those produced by Oculus, which Facebook owns), controlling a three-dimensional virtual avatar, and browsing a series of virtual storefronts, the metaverse equivalents of different platforms like Instagram (which Facebook also owns), Netflix, or the video game Minecraft. You might gather with friends in the virtual landscape and all watch a movie in the same virtual theatre. “You’re basically going to be able to do everything that you can on the Internet today as well as some things that don’t make sense on the Internet today, like dancing,” Zuckerberg said. In the future we might walk through Facebook, wear clothes on Facebook, host virtual parties on Facebook, or own property in the digital territory of Facebook. Each activity in what we once thought of as the real world will develop a metaverse equivalent, with attendant opportunities to spend money doing that activity online. “Digital goods and creators are just going to be huge,” Zuckerberg said.

    This shift is already beginning to take place, though not yet under Facebook’s domain. The video game Second Life, which was released in 2003 by Linden Lab, created a virtual world where users could wander, building their own structures; land can be bought there for either U.S. dollars or the in-game currency, Linden Dollars. Roblox, a children’s video game launched in 2006, has lately evolved into an immersive world in which players can design and sell their own creations, from avatar costumes to their own interactive experiences. Rather than a single game, Roblox became a platform for games. Fortnite, released in 2017, evolved from an online multiplayer free-for-all shoot-’em-up into a more diffuse space in which players can collaboratively build structures or attend concerts and other live in-game events. (Ariana Grande just announced an upcoming virtual show there.) Players of Fortnite buy customized avatar “skins” and motions or gestures that the avatars can perform—perhaps that’s where Zuckerberg got his reference to dancing. If any company is primed to profit from the metaverse it’s the maker of Fortnite, Epic Games, which owns a game marketplace and also sells Unreal Engine, the three-dimensional design software that is used in every corner of the gaming industry and in streaming blockbusters such as the “Star Wars” TV series “The Mandalorian.” In April, the company announced a billion-dollar funding round to support its “vision for the metaverse.”

    No single company is meant to own or run the metaverse, however; it requires coöperation to create consistency. Assets that one acquires in the metaverse will hypothetically be portable, moving even between platforms owned by different corporations. This synchronization might be enabled by blockchain technology like cryptocurrencies and non-fungible tokens, which are defined by their immutable record keeping. If you bought an N.F.T. avatar from the online society Bored Ape Yacht Club, Fortnite could theoretically verify your ownership on the blockchain and then allow you to use the avatar within its game world. The same avatar might show up on Roblox, too. The various realms are supposed to maintain “interoperability,” as Zuckerberg said in the earnings call, linking together to form the wider hypothetical metaverse, the way every Web site exists non-hierarchically on the open protocol of the Internet.

    The metaverse represents a techno-optimist vision for a future in which culture can exist in all forms at once. Intellectual property—a phrase increasingly applied to creative output of any kind—can move seamlessly among movies, video games, and virtual-reality environments. It’s a tantalizing possibility for the corporate producers of culture, who will profit from their I.P. wherever it goes. Disney’s Marvel pantheon of superhero narratives already amounts to a “cinematic universe”; why not unleash it into every possible platform simultaneously? In Fortnite, as the pro-metaverse investor Matthew Ball wrote in an influential essay last year, “You can literally wear a Marvel character’s costume inside Gotham City, while interacting with those wearing legally licensed N.F.L. uniforms.” (How appealing you find this may depend on how addicted you are to logos.) In the future, users’ own creations may attain the same kind of portability and profitability, letting fan concepts compete with Marvel just as self-published blogs once disrupted newspapers.

    Judging from Facebook’s growth strategy over the past decade, though, Zuckerberg won’t be satisfied with making his company one component of a multiplatform metaverse. Just as the company bought, absorbed, and outcompeted smaller social-media platforms until it resembled a monopoly, it may try to control the entire space in which users dwell so that it will be able to charge us rents. Facebook may, indeed, create virtual real estate that online small businesses will have to rent in order to sell their wares, or build an in-game meeting space where an impressive, expensive avatar will be key to networking, like the equivalent of a fancy Zoom background. Our physical lives are already so saturated with Facebook and its other properties that the company must build new structures for the virtual iterations of our lives, and then dominate those as well in order to keep expanding.

    Zuckerberg’s comments brought to my mind an earlier iteration of online life, a game and social space called Neopets. Neopets launched in 1999; I remember playing it in middle school, trading strategies with friends. In the game, the player takes care of small digital creatures, feeding and grooming them as well as buying accessories with “Neopoints” earned from in-game activities. It was a point of pride and a form of self-expression, albeit a nerdy one, to have a highly developed profile in the game. In the metaverse Facebook envisions, however, you are the Neopet, and your in-game activities may affect every sphere of life that Facebook already touches: careers, relationships, politics. In Zuckerberg’s vision, Neopoints become Facebook dollars, only usable on the platform; your self-presentation online becomes a choice limited to options that Facebook provides. A blue-and-gray virtual universe looms. The more immersive it is, the more inescapable it becomes, like an all-encompassing social-media feed, with all the problems thereof.

    #Metaverse #Culture_Numérique

  • The Age of Instagram Face | The New Yorker
    https://www.newyorker.com/culture/decade-in-review/the-age-of-instagram-face

    This past summer, I booked a plane ticket to Los Angeles with the hope of investigating what seems likely to be one of the oddest legacies of our rapidly expiring decade: the gradual emergence, among professionally beautiful women, of a single, cyborgian face. It’s a young face, of course, with poreless skin and plump, high cheekbones. It has catlike eyes and long, cartoonish lashes; it has a small, neat nose and full, lush lips. It looks at you coyly but blankly, as if its owner has taken half a Klonopin and is considering asking you for a private-jet ride to Coachella. The face is distinctly white but ambiguously ethnic—it suggests a National Geographic composite illustrating what Americans will look like in 2050, if every American of the future were to be a direct descendant of Kim Kardashian West, Bella Hadid, Emily Ratajkowski, and Kendall Jenner (who looks exactly like Emily Ratajkowski). “It’s like a sexy . . . baby . . . tiger,” Cara Craig, a high-end New York colorist, observed to me recently. The celebrity makeup artist Colby Smith told me, “It’s Instagram Face, duh. It’s like an unrealistic sculpture. Volume on volume. A face that looks like it’s made out of clay.”

    Instagram, which launched as the decade was just beginning, in October, 2010, has its own aesthetic language: the ideal image is always the one that instantly pops on a phone screen. The aesthetic is also marked by a familiar human aspiration, previously best documented in wedding photography, toward a generic sameness. Accounts such as Insta Repeat illustrate the platform’s monotony by posting grids of indistinguishable photos posted by different users—a person in a yellow raincoat standing at the base of a waterfall, or a hand holding up a bright fall leaf. Some things just perform well.

    The human body is an unusual sort of Instagram subject: it can be adjusted, with the right kind of effort, to perform better and better over time. Art directors at magazines have long edited photos of celebrities to better match unrealistic beauty standards; now you can do that to pictures of yourself with just a few taps on your phone.

    Snapchat, which launched in 2011 and was originally known as a purveyor of disappearing messages, has maintained its user base in large part by providing photo filters, some of which allow you to become intimately familiar with what your face would look like if it were ten-per-cent more conventionally attractive—if it were thinner, or had smoother skin, larger eyes, fuller lips. Instagram has added an array of flattering selfie filters to its Stories feature. FaceTune, which was released in 2013 and promises to help you “wow your friends with every selfie,” enables even more precision. A number of Instagram accounts are dedicated to identifying the tweaks that celebrities make to their features with photo-editing apps. Celeb Face, which has more than a million followers, posts photos from the accounts of celebrities, adding arrows to spotlight signs of careless FaceTuning. Follow Celeb Face for a month, and this constant perfecting process begins to seem both mundane and pathological. You get the feeling that these women, or their assistants, alter photos out of a simple defensive reflex, as if FaceTuning your jawline were the Instagram equivalent of checking your eyeliner in the bathroom of the bar.

    “I think ninety-five per cent of the most-followed people on Instagram use FaceTune, easily,” Smith told me. “And I would say that ninety-five per cent of these people have also had some sort of cosmetic procedure. You can see things getting trendy—like, everyone’s getting brow lifts via Botox now. Kylie Jenner didn’t used to have that sort of space around her eyelids, but now she does.”

    Twenty years ago, plastic surgery was a fairly dramatic intervention: expensive, invasive, permanent, and, often, risky. But, in 2002, the Food and Drug Administration approved Botox for use in preventing wrinkles; a few years later, it approved hyaluronic-acid fillers, such as Juvéderm and Restylane, which at first filled in fine lines and wrinkles and now can be used to restructure jawlines, noses, and cheeks. These procedures last for six months to a year and aren’t nearly as expensive as surgery. (The average price per syringe of filler is six hundred and eighty-three dollars.) You can go get Botox and then head right back to the office.

    Ideals of female beauty that can only be met through painful processes of physical manipulation have always been with us, from tiny feet in imperial China to wasp waists in nineteenth-century Europe. But contemporary systems of continual visual self-broadcasting—reality TV, social media—have created new disciplines of continual visual self-improvement. Social media has supercharged the propensity to regard one’s personal identity as a potential source of profit—and, especially for young women, to regard one’s body this way, too. In October, Instagram announced that it would be removing “all effects associated with plastic surgery” from its filter arsenal, but this appears to mean all effects explicitly associated with plastic surgery, such as the ones called “Plastica” and “Fix Me.” Filters that give you Instagram Face will remain. For those born with assets—natural assets, capital assets, or both—it can seem sensible, even automatic, to think of your body the way that a McKinsey consultant would think about a corporation: identify underperforming sectors and remake them, discard whatever doesn’t increase profits and reorient the business toward whatever does.

    Another client is Kim Kardashian West, whom Colby Smith described to me as “patient zero” for Instagram Face. (“Ultimately, the goal is always to look like Kim,” he said.) Kardashian West, who has inspired countless cosmetically altered doppelgängers, insists that she hasn’t had major plastic surgery; according to her, it’s all just Botox, fillers, and makeup. But she also hasn’t tried to hide how her appearance has changed. In 2015, she published a coffee-table book of selfies, called “Selfish,” which begins when she is beautiful the way a human is beautiful and ends when she’s beautiful in the manner of a computer animation.

    On the way to Diamond’s office, I had passed a café that looked familiar: pale marble-topped tables, blond-wood floors, a row of Prussian-green snake plants, pendant lamps, geometrically patterned tiles. The writer Kyle Chayka has coined the term “AirSpace” for this style of blandly appealing interior design, marked by an “anesthetized aesthetic” and influenced by the “connective emotional grid of social media platforms”—these virtual spaces where hundreds of millions of people learn to “see and feel and want the same things.” WeWork, the collapsing co-working giant—which, like Instagram, was founded in 2010—once convinced investors of a forty-seven-billion-dollar vision in which people would follow their idiosyncratic dreams while enmeshed in a global network of near-indistinguishable office spaces featuring reclaimed wood, neon signs, and ficus trees.

    #Instagram #Chirurgie_esthétique #Botox #Kim_Kardashian #Oppression_physique

  • Slack Is the Right Tool for the Wrong Way to Work | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/slack-is-the-right-tool-for-the-wrong-way-to-work

    Though Slack improved the areas where e-mail was lacking in an age of high message volume, it simultaneously amplified the rate at which this interaction occurs. Data gathered by the software firm RescueTime estimate that employees who use Slack check communications tools more frequently than non-users, accessing them once every five minutes on average—an absurdly high rate of interruption. Neuroscientists and psychologists teach us that our attention is fundamentally single-tasked, and switching it from one target to another is detrimental to productivity. We’re simply not wired to monitor an ongoing stream of unpredictable communication at the same time that we’re trying to also finish actual work. E-mail introduced this problem of communication-driven distraction, but Slack pushed it to a new extreme. We both love and hate Slack because this company built the right tool for the wrong way to work.

    I do not dislike Slack as much as people assume given that I wrote a book titled “Deep Work,” which advocates for the importance of long, undistracted stretches of work. The acceleration of interruption is a problem, but e-mail has its limitations, so it makes sense that companies committed to ad-hoc messaging as their central organizing principle would want to try Slack. If this tool represented the culmination of our attempts to figure out how to best work together in a digital age, I’d be more concerned, but Slack seems to be more transient. It’s a short-term optimization of our first hasty attempts to make sense of a high-tech professional world that will be followed by more substantial revolutions. The future of office work won’t be found in continuing to reduce the friction involved in messaging but, instead, in figuring out how to avoid the need to send so many messages in the first place.

    #Slack #Collaboration #Travail

  • When James Baldwin Went South | The New Yorker
    https://www.newyorker.com/culture/video-dept/when-james-baldwin-went-south

    In 1979, #James_Baldwin approached The New Yorker with an idea for a long essay: he would travel to the cities in the South that were central to the civil-rights struggle—Selma, Birmingham, Atlanta, and elsewhere—and consider what the fallen heroes of the movement, including Martin Luther King, Jr., and Malcolm X, would make of the world that had and hadn’t emerged after their deaths. The project soon swelled into a proposal for a book that would be called “Remember This House,” which Hilton Als refers to as “a book that he does not want to write but knows he must write.” Neither the essay nor the book was ever published. Instead, what came out of Baldwin’s trip was the documentary “I Heard It Through the Grapevine,” directed by Dick Fontaine and Pat Hartley and released in 1982, which tells a story not of the dead but of those who lived to see many of the gains of the movement undone by an increasingly punitive criminal-justice system and the rise of Reaganism. (The Harvard Film Archive is restoring the documentary for a digital release early next year.)

  • When June Jordan and Buckminster Fuller Tried to Redesign Harlem | The New Yorker
    https://www.newyorker.com/culture/culture-desk/when-june-jordan-and-buckminster-fuller-tried-to-redesign-harlem

    The uprisings coincided with a turbulent period in Jordan’s life. A week after the riots, Jordan’s husband wrote to say that he wouldn’t be returning to their home; Jordan, increasingly destitute, sent her son to his grandparents. She wrote to Fuller, he responded almost immediately, and they spent several months drafting “Skyrise for Harlem,” a plan for a neighborhood where residents had long been subjected to constant policing, cramped quarters, and dilapidated schools. Their plan would transform Harlem without displacing any of its existing residents, who often became the collateral damage of “urban renewal” (or what Jordan and others called “Negro removal”). Urban renewal involves the designation of certain areas as “blight”—a term disproportionately applied to low-income Black and brown communities—in order to justify demolition of existing structures and authorize new building. The practice was exemplified by Robert Moses, whose now-infamous Cross Bronx Expressway, for example, relied on denying the rich cultural networks and microeconomies of East Tremont, which were then destroyed by the highway’s construction.

    In contrast with urban-renewal projects that devalued Black and brown populations, Fuller and Jordan’s design sought to transform the environment in service of Harlem’s residents. The plan was ambitious, but drastic measures were required. “Partial renovation is not enough,” Jordan wrote. “A half century of despair requires exorcism.” Columns installed in backyards would act as stilts so that construction of fifteen fireproof, conical towers could take place above existing buildings. These towers would contain new dwelling space—light-filled apartments of twelve-hundred square feet, each equipped with a balcony and parking spot—as well as studios, concert halls, theatres, athletic fields, and recreational space. Parking ramps and suspension bridges would cut through the towers, and green space and collective leisure areas would be expanded. After construction was completed, the residents who lived in the buildings below would simply move up to the improved units. After residents had settled into their new units, the old units would be “converted into communal, open space for recreation, parking and so forth.”

    Under her married name, Meyer, Jordan wrote about “Skyrise for Harlem” in the April, 1965, issue of Esquire. Jordan chafed against Esquire’s stipulations. “The limitation of 2500 words seems to me arbitrary and acceptable only if it becomes possible to adequately condense to a poetry of form the verbal aspect of the piece,” she wrote to Fuller, emphasizing the project’s allegiance to radical imagination. In the article, Jordan omitted her own integral role in the project—perhaps to seize on the celebrity of her collaborator, who had appeared on the cover of Time the previous year. “Fuller’s design,” “Fuller’s circular decked towers,” “Fuller’s solution,” Jordan wrote. Still, their shared enthusiasm for the transformative potential of design comes through: “There is no evading architecture, no meaningful denial of our position. You can build to defend the endurance of man, to protect his existence, to illuminate it. But you cannot build for these purposes merely in spasmodic response to past and present crises, for then crisis, like the poor, will be with us always.”

    Jordan submitted the article under the headline “Skyrise for Harlem,” but the editors replaced it with one of their own, “Instant Slum Clearance,” which encapsulated precisely the dominant urban-planning idea that Jordan and Fuller’s design rejected: that Black residents were a form of contamination who had to be removed for a neighborhood to flourish. The subtitle—“R. Buckminster Fuller designs a total solution to an American dilemma: here, for instance, is how it would work in Harlem”—clinched the project’s attribution to Fuller and reversed Jordan’s guiding question, “What was I for?” It cast the plan from one motivated by the love of a particular place into one preoccupied with a generalized violence.

    The check from Esquire arrived on December 24, 1964. “I pleaded with the bank to cash the check, immediately,” Jordan recalled. Then she headed to the airport to pick up her son, who made it home just in time for Christmas.

    “Skyrise for Harlem” never made it off the page. Although Jordan insisted that the pair fully expected the plan to be carried out, its fate was hardly an anomaly for Fuller, whose spectacular ideas regularly outpaced his commitment to seeing them through. Unlike many of Fuller’s other brainstorms, however, engagements with “Skyrise” have been scattershot. A few sources have covered the project, without giving credit to Jordan. A 1965 article in the Southern Illinoisan, Fuller’s local newspaper, described the proposal, giving sole credit to Fuller. The Whitney’s exhibition “Buckminster Fuller: Starting with the Universe,” from 2008, included the blueprint by Fuller’s associate Shoji Sadao that appeared alongside the Esquire article, with no mention of Jordan. Jordan wrote about the project’s genesis and her frustration with Esquire’s editorial changes in “Civil Wars.”

    #Planification_urbaine #Harlem #1964

  • The Second Act of Social-Media Activism | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-second-act-of-social-media-activism

    A fascinating article that starts from Zeynep Tufekci’s analyses and reconsiders them in light of more recent movements.

    Some of this story may seem familiar. In “Twitter and Tear Gas: The Power and Fragility of Networked Protest,” from 2017, the sociologist Zeynep Tufekci examined how a “digitally networked public sphere” had come to shape social movements. Tufekci drew on her own experience of the 2011 Arab uprisings, whose early mobilization of social media set the stage for the protests at Gezi Park, in Istanbul, the Occupy action, in New York City, and the Black Lives Matter movement, in Ferguson. For Tufekci, the use of the Internet linked these various, decentralized uprisings and distinguished them from predecessors such as the nineteen-sixties civil-rights movement. Whereas “older movements had to build their organizing capacity first,” Tufekci argued, “modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first protest or march.”

    The speed afforded by such protest is, however, as much its peril as its promise. After a swift expansion, spontaneous movements are often prone to what Tufekci calls “tactical freezes.” Because they are often leaderless, and can lack “both the culture and the infrastructure for making collective decisions,” they are left with little room to adjust strategies or negotiate demands. At a more fundamental level, social media’s corporate infrastructure makes such movements vulnerable to coöptation and censorship. Tufekci is clear-eyed about these pitfalls, even as she rejects the broader criticisms of “slacktivism” laid out, for example, by Evgeny Morozov’s “The Net Delusion,” from 2011.

    “Twitter and Tear Gas” remains trenchant about how social media can and cannot enact reform. But movements change, as does technology. Since Tufekci’s book was published, social media has helped represent—and, in some cases, helped organize—the Arab Spring 2.0, France’s “Yellow Vest” movement, Puerto Rico’s RickyLeaks, the 2019 Iranian protests, the Hong Kong protests, and what we might call the B.L.M. uprising of 2020. This last event, still ongoing, has evinced a scale, creativity, and endurance that challenges those skeptical of the Internet’s ability to mediate a movement. As Tufekci notes in her book, the real-world effects of Occupy, the Women’s March, and even Ferguson-era B.L.M. were often underwhelming. By contrast, since George Floyd’s death, cities have cut billions of dollars from police budgets; school districts have severed ties with police; multiple police-reform-and-accountability bills have been introduced in Congress; and cities like Minneapolis have vowed to defund policing. Plenty of work remains, but the link between activism, the Internet, and material action seems to have deepened. What’s changed?

    The current uprisings slot neatly into Tufekci’s story, with one exception. As the flurry of digital activism continues, there is no sense that this movement is unclear about its aims—abolition—or that it might collapse under a tactical freeze. Instead, the many protest guides, syllabi, Webinars, and the like have made clear both the objectives of abolition and the digital savvy of abolitionists. It is a message so legible that even Fox News grasped it with relative ease. Rachel Kuo, an organizer and scholar of digital activism, told me that this clarity has been shaped partly by organizers who increasingly rely on “a combination of digital platforms, whether that’s Google Drive, Signal, Messenger, Slack, or other combinations of software, for collaboration, information storage, resource access, and daily communications.” The public tends to focus, understandably, on the profusion of hashtags and sleek graphics, but Kuo stressed that it was this “back end” work—an inventory of knowledge, a stronger sense of alliance—that has allowed digital activism to “reflect broader concerns and visions around community safety, accessibility, and accountability.” The uprisings might have unfolded organically, but what has sustained them is precisely what many prior networked protests lacked: preëxisting organizations with specific demands for a better world.

    What’s distinct about the current movement is not just the clarity of its messaging, but its ability to convey that message through so much noise. On June 2nd, the music industry launched #BlackoutTuesday, an action against police brutality that involved, among other things, Instagram and Facebook users posting plain black boxes to their accounts. The posts often included the hashtag #BlackLivesMatter; almost immediately, social-media users were inundated with even more posts, which explained why using that hashtag drowned out crucial information about events and resources with a sea of mute boxes. For Meredith Clark, a media-studies professor at the University of Virginia, the response illustrated how the B.L.M. movement had honed its ability to stick to a program, and to correct those who deployed that program naïvely. In 2014, many people had only a thin sense of how a hashtag could organize actions or establish circles of care. Today, “people understand what it means to use a hashtag,” Clark told me. They use “their own social media in a certain way to essentially quiet background noise” and “allow those voices that need to connect with each other the space to do so.” The #BlackoutTuesday affair exemplified an increasing awareness of how digital tactics have material consequences.

    These networks suggest that digital activism has entered a second act, in which the tools of the Internet have been increasingly integrated into the hard-won structure of older movements. Yet, as networked protest grows in scale and popularity, it still risks being hijacked by the mainstream. Any urgent circulation of information—the same memes filtering through your Instagram stories, the same looping images retweeted into your timeline—can be numbing, and any shift in the Overton window means that hegemony drifts with it.

    In “Twitter and Tear Gas,” Tufekci wrote, “The Black Lives Matter movement is young, and how it will develop further capacities remains to be seen.” The movement is older now. It has developed its tactics, its messaging, its reach—but perhaps its most striking new capacity is a sharper recognition of social media’s limits. “This movement has mastered what social media is good for,” Deva Woodly, a professor of politics at the New School, told me. “And that’s basically the meme: it’s the headline.” Those memes, Woodly said, help “codify the message” that leads to broader, deeper conversations offline, which, in turn, build on a long history of radical pedagogy. As more and more of us join those conversations, prompted by the words and images we see on our screens, it’s clear that the revolution will not be tweeted—at least, not entirely.

    #Activisme_connecté #Black_lives_matter #Zeynep_Tufekci #Mèmes #Hashtag_movements #Médias_sociaux

  • The Walkman, Forty Years On | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-walkman-forty-years-on

    Even prior to extended quarantines, lockdowns, and self-isolation, it was hard to imagine life without the electronic escapes of noise-cancelling earbuds, smartphones, and tablets. Today, it seems impossible. Of course, there was most certainly a before and after, a point around which the cultural gravity of our plugged-in-yet-tuned-out modern lives shifted. Its name is Walkman, and it was invented, in Japan, in 1979. After the Walkman arrived on American shores, in June of 1980, under the temporary name of Soundabout, our days would never be the same.

    Up to this point, music was primarily a shared experience: families huddling around furniture-sized Philcos; teens blasting tunes from automobiles or sock-hopping to transistor radios; the bar-room juke; break-dancers popping and locking to the sonic backdrop of a boom box. After the Walkman, music could be silence to all but the listener, cocooned within a personal soundscape, which spooled on analog cassette tape. The effect was shocking even to its creators. “Everyone knows what headphones sound like today,” the late Sony designer Yasuo Kuroki wrote in a Japanese-language memoir, from 1990. “But at the time, you couldn’t even imagine it, and then suddenly Beethoven’s Fifth is hammering between your ears.”

    Sony’s chairman at the time, the genial Akio Morita, was so unsure of the device’s prospects that he ordered a manufacturing run of only thirty thousand, a drop in the bucket compared to such established lines as Trinitron televisions. Initially, he seemed right to be cautious. The Walkman débuted in Japan to near silence. But word quickly spread among the youth of Tokyo about a strange new device that let you carry a soundtrack out of your bedroom, onto commuter trains, and into city streets. Within a year and a half of the appearance of the Walkman, Sony would produce and sell two million of them.

    For the Walkman’s growing numbers of users, isolation was the whole point. “With the advent of the Sony Walkman came the end of meeting people,” Susan Blond, a vice-president at CBS Records, told the Washington Post in 1981. “It’s like a drug: You put the Walkman on and you blot out the rest of the world.” It didn’t take long for academics to coin a term for the phenomenon. The musicologist Shuhei Hosokawa called it “the Walkman effect.”

    There had been popular electronic gadgets before, such as the pocket-sized transistor radios of the fifties, sixties, and seventies. But the Walkman was in another league. Until this point, earphones had been associated with hearing impairment, geeky technicians manning sonar stations, or basement-dwelling hi-fi fanatics. Somehow, a Japanese company had made the high-tech headgear cool.

    “Steve’s point of reference was Sony at the time,” his successor at Apple, John Sculley, recalled. “He really wanted to be Sony. He didn’t want to be IBM. He didn’t want to be Microsoft. He wanted to be Sony.”

    Jobs would get his wish with the début of the iPod, in 2001. It wasn’t the first digital-music player—a South Korean firm had introduced one back in 1998. (That Sony failed to exploit the niche, in spite of having created listening-on-the-go and even owning its own record label, was a testament to how Morita’s unexpected retirement after a stroke, in 1993, hobbled the corporation.) But Apple’s was the most stylish to date, bereft of the complicated and button-festooned interfaces of its competitors, finished in sleek pearlescent plastic and with a satisfying heft that hinted at powerful technologies churning inside. Apple also introduced a tantalizing new method of serving up music: the shuffle, which let listeners remix entire musical libraries into never-ending audio backdrops for their lives. Once again, city streets were the proving ground for this evolution of portable listening technology. “I was on Madison [Ave],” Jobs told Newsweek, in 2004, “and it was, like, on every block, there was someone with white headphones, and I thought, ‘Oh, my God, it’s starting to happen.’ ”

    #Walkman #Sony #Steve_Jobs #Musique #Isolement

  • Bob Dylan’s “Rough and Rowdy Ways” Hits Hard | The New Yorker
    https://www.newyorker.com/culture/culture-desk/bob-dylans-rough-and-rowdy-ways-hits-hard

    A few weeks into quarantine, time became liquid. All the usual markers and routines—waking up and lurching down the block to buy a cup of coffee, dressing carefully for a work meeting, corralling friends for karaoke on a Sunday afternoon—were nullified, and the days assumed a soft, amorphous quality. Then, at midnight on a Friday, Bob Dylan suddenly released “Murder Most Foul,” an elegiac, thickset, nearly seventeen-minute song ostensibly about the assassination of J.F.K., but so laden with cultural allusions that it somehow felt even bigger than that. It was the first piece of original music Dylan had released since his album “Tempest,” in 2012, and, on first listen, I found the song surreal. It went on forever; it was over before I knew it. The instrumentation (piano, bowed bass, faint percussion) is hazy and diffuse. Dylan’s vocal phrasing, always careful, felt particularly mesmeric. Rub-a-dub-dub, Altamont, Deep Ellum, Patsy Cline, Air Force One, Thelonious Monk, Bugsy Siegel, Pretty Boy Floyd. What day was it? What year?

    Two months later, “Murder Most Foul” hits different: “We’re gonna kill you with hatred / Without any respect / We’ll mock you and shock you / And we’ll put it in your face,” Dylan sings in the song’s first verse. His voice is withering. “It’s a Murder. Most. Foul.” Dylan has spent decades seeing and chronicling American injustice. Forty-four years ago, on “Hurricane,” he sang frankly about police brutality: “If you’re black, you might as well not show up on the street / ’Less you want to draw the heat.”

    This week, Dylan will release “Rough and Rowdy Ways,” a gruesome, crowded, marauding album that feels unusually attuned to its moment. Unlike many artists who reacted to the pandemic with a kind of dutiful tenderness—“Let me help with my song!”—Dylan has decided not to offer comfort, nor to hint at some vague solidarity. Lyrically, he’s either cracking weird jokes (“I’ll take the ‘Scarface’ Pacino and the ‘Godfather’ Brando / Mix ’em up in a tank and get a robot commando”) or operating in a cold, disdainful, it-ain’t-me-babe mode. Dylan’s musicianship is often undersold by critics, but on “Rough and Rowdy Ways” it’s especially difficult to focus on anything other than his voice; at seventy-nine, he sounds warmed up and self-assured. There are moments when he appears to be chewing on his own mortality—he recently told the Times that he thinks about death “in general terms, not in a personal way”—but mostly he sounds elegant and steady, a vocal grace he might have acquired while recording all those standards. “Three miles north of Purgatory, one step from the great beyond,” he sings calmly on “Crossing the Rubicon.”

    It’s sometimes hard to think of Dylan doing normal, vulnerable things like falling in love, though he sings about heartache—his compulsion toward it, his indulgence of its wounds—constantly. My favorite track on “Rough and Rowdy Ways” is “I’ve Made Up My Mind to Give Myself to You,” a gentle ballad about deliberately resigning oneself to love and its demands. It’s not the album’s richest or most complicated song—“Key West (Philosopher Pirate)” is Shakespearean—but I’ve been listening to it constantly, mostly for its evocation of a certain kind of golden-hour melancholy. Imagine sitting on a porch or on the front steps of an apartment building, nursing a big drink in a stupid glass, and reluctantly accepting your fate: “Been thinking it all over / And I thought it all through / I’ve made up my mind / To give myself to you.” It’s not quite romantic, but, then again, neither is love. The song’s emotional climax comes less than halfway through, when Dylan announces, “From Salt Lake City to Birmingham / From East L.A. to San Antone / I don’t think I could bear to live my life alone!” Ever so briefly, his voice goes feral.

    Dylan is a voracious student of United States history—he can, and often does, itemize the various atrocities that have been committed in service to country—and “Rough and Rowdy Ways” could be understood as a glib summation of America’s outlaw origins, and of the confused, dangerous, and often haphazard way that we preserve democracy. He seems to understand instinctively that American history is not a series of fixed points but an unmoored and constantly evolving idea that needs to be reëstablished each day—things don’t happen once and then stop happening. In this sense, linear time becomes an invention; every moment is this moment. This is why, on “Murder Most Foul,” Buster Keaton and Dickey Betts and the Tulsa race massacre of 1921 and Lindsey Buckingham and Stevie Nicks and the Birdman of Alcatraz can coexist, harmoniously, in a single verse. That Dylan named another dense, allusive song on the album, “I Contain Multitudes,” after a much-quoted stanza from Walt Whitman’s “Song of Myself”—“Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)”—also seems to indicate some reckoning with the vastness and immediacy of American culture. (Dylan’s interests are so wonderfully obtuse and far-ranging that it’s sometimes hard to discern precisely what he’s referring to: Is the “Cry Me a River” that he mentions on “Murder Most Foul” a reference to the jazz standard made famous by the actress Julie London, in 1955, or to the dark, cluttered revenge jam that Justin Timberlake supposedly wrote about Britney Spears, in 2002? My money is on the latter.)

    Now thirty-nine albums in, it’s tempting to dismiss Dylan as sepia-toned—a professor emeritus, a museum piece, a Nobel laureate coasting through his sunset years, the mouthpiece of some bygone generation but certainly not this one. (It’s hard, admittedly, to imagine bars of “I Contain Multitudes” finding viral purchase on TikTok.) The sheer volume of writing about his life and music suggests a completed arc, which makes it easy to presume that there’s nothing useful, interesting, or pertinent left to say. Yet, for me, Dylan’s vast and intersectional understanding of the American mythos feels so plainly and uniquely relevant to the grimness and magnitude of these past few months. As the country attempts to metabolize the murder of George Floyd, it is also attempting to reckon with every crooked, brutal, odious, or unjust murder of a black person—to understand a cycle that began centuries ago and somehow continues apace. What is American racism? It’s everything, Dylan insists. Indiana Jones and J.F.K. and Elvis Presley and Jimmy Reed—nothing exists without the rest of it. None of us are absolved, and none of us are spared.
    Amanda Petrusich is a staff writer at The New Yorker and the author of “Do Not Sell at Any Price: The Wild, Obsessive Hunt for the World’s Rarest 78rpm Records.”

    #Bob_Dylan #Music

    • A funny interview that reads more like a chat between girlfriends, the young journalist and the old leftist feminist (a Maoist, she says). The questions are almost more interesting than the answers. It touches on mayonnaise and on #Marie_Kondo, whose poor English #Barbara_Ehrenreich criticized.

      Well, I think what I said was really stupid—ill considered and written quickly and I was mortified. Some editor had asked me to write something about Marie Kondo, so I watched part of her show on Netflix, and I was appalled. I hope that’s not intrinsically bad. I’ll admit something to you—one thing that was also going on was that my mother would just throw all my clothes out of the chest of drawers and onto the floor when she thought things were messy. Something about that got triggered with Marie Kondo and I felt this sort of rage, not that that’s an excuse or anything.

      #Rebecca_Solnit too.

      Bizarre!

  • The Faces of a New Union Movement | The New Yorker
    https://www.newyorker.com/culture/photo-booth/the-faces-of-a-new-union-movement

    Haag is part of a wave of young workers who have been unionizing in sectors with little or no tradition of unions: art museums, including the Guggenheim and the New Museum, but also tech companies, digital-media brands, political campaigns, even cannabis shops. At Google, around ninety contract workers in Pittsburgh recently formed a union—a significant breakthrough, even if they represent just a tiny fraction of the company’s workforce. More than thirty digital publications, including Vox, Vice, Salon, Slate, and HuffPost, have unionized. (The editorial staff of The New Yorker unionized in 2018.) Last March, Bernie Sanders’s campaign became the first major-party Presidential campaign in history with a unionized workforce; the campaigns of Eric Swalwell, Julián Castro, and Elizabeth Warren unionized soon after. At Grinnell College, in Iowa, students working in the school’s dining hall unionized in 2016, becoming one of the nation’s only undergraduate-student labor unions. Sam Xu, the union’s twenty-one-year-old former president, said, “Mark Zuckerberg was running Facebook out of his dorm room. I’m running a union out of my dorm room.”

    The American labor movement has been reinvigorated in recent years, with the teacher-led Red for Ed strikes, the General Motors walkout, and the Fight for $15’s push to raise the minimum wage. A Gallup poll last summer found that sixty-four per cent of Americans approve of unions—one of the highest ratings recorded in the past fifty years. The highest rate of approval came from young people: sixty-seven per cent among eighteen-to-thirty-four-year-olds. Rebecca Givan, an associate professor of labor studies at Rutgers University, said that many young people are interested in joining unions because they’re “feeling the pinch”—many “have a tremendous amount of student debt, and, if they’re living in cities, they’re struggling to afford housing.” Givan added that many feel considerable insecurity about their jobs. “The industries that they’re organizing in are volatile,” she said. Jake Rosenfeld, an associate professor of sociology at Washington University, said, “Underemployed college-educated workers aren’t buying what was until recently the prevailing understanding of our economy: that hard work and a college degree was a ticket to a stable, well-paying job.”

    #Syndicats #Gig_economy

  • A Day of Reckoning for Michael Jackson with “Leaving Neverland” | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/a-day-of-reckoning-for-michael-jackson-with-leaving-neverland

    It is hideous, but true, that allegations of this sort have historically been treated differently when the accused is a virtuosic and deeply beloved male performer: Miles Davis allegedly beat his wives; Jimmy Page allegedly had a relationship with a fourteen-year-old girl; the late rapper XXXTentacion allegedly battered his ex-girlfriend when she was pregnant; Chuck Berry was convicted of transporting a minor across state lines for “immoral purposes”; and on, and on, and on, until the entire history of Western music collapses in a haze of abuse and transgression, unable to survive any sort of moral dragnet.

  • Bond Touch Bracelets and the New Frontiers of Digital Dating | The New Yorker
    https://www.newyorker.com/culture/culture-desk/bond-touch-bracelets-and-the-new-frontiers-of-digital-dating

    Few things feel as fraught, in the modern age, as the long-distance relationship. The hazards of digital romance have been well chronicled, perhaps most prominently in the documentary and subsequent TV series “Catfish,” which exposed viewers to a new and expansive genre of horror. To “catfish” someone, in common parlance, is to meet a person online through dating apps, social-media sites, or chat rooms, and to seduce them using fake photos and fictional biographical details. On the reality-TV version of “Catfish,” lovesick victims confront those who deceived them, in grim, emotional scenes of revelation and heartbreak. Throw teens into the mix, and the narrative can turn even more ghastly. One thinks of the tabloid story of Michelle Carter and her boyfriend, Conrad Roy III, two teen-agers whose relationship developed mostly over text and Facebook message. In 2017, Carter was convicted of involuntary manslaughter for encouraging Roy to kill himself—even though the pair had met only a handful of times. Messages between the couple revealed the kind of twisted emotional dynamic that can emerge in the absence of physical proximity.

    Despite these stories, digital-first (and digital-only) relationships continue to thrive. With online dating now a fact of life, a new bogeyman, virtual-reality dating, has taken its place, threatening to cut the final cord between romance and the real world. The platform VRLFP—Virtual Reality Looking For Partner—advertises itself as the perfect solution for daters who’d rather not deal with the hassles of Tinder flirting or late-night bar crawls. (“Grab a coffee, visit an amusement park, or go to the moon without leaving your home and without spending a dime,” the VRLFP site reads. “VR makes long-distance relationships work.”) This is to say nothing of the companies designing humanoid sex robots, or the scientists designing phone cases that feel like human flesh.

    Perhaps the most innocuous entry in the digital-dating marketplace is a new product called Bond Touch, a set of electronic bracelets meant for long-distance daters. (Shawn Mendes and Camila Cabello, one of the most P.D.A.-fluent couples of our time, were recently spotted wearing the bracelets.) Unlike the cold fantasias of VR courtship, Bond Touch bracelets are fundamentally wholesome, and they reduce long-distance relationships to a series of mundane concerns. How can you sustain a healthy amount of communication with a long-distance partner? How can you feel close to someone who’s physically distant? And how do you simulate the wordless gestures of affection that account for so much of personal connection? Created in Silicon Valley by a developer named Christoph Dressel—who is also the C.O.O. of an environmentally minded technology firm called Impossible—the bracelets are slim, chic devices that resemble Fitbits. By wearing one, a person can send a tap that generates a light vibration and a colored blink on the screen of a partner’s bracelet. The bracelets are also linked through an app that provides information about a partner’s weather and time zone, but their primary function is to embody presence. Like Facebook’s early “Poke” feature, they impart the same message as a shoulder squeeze or a gaze across the room at a party: “I’m here, and I’m thinking about you.”

    In theory, the bracelets could service any form of long-distance relationship—military members and their families, partners separated by jobs or school, siblings living in different cities—but they seem to be most popular among teen-agers who’ve forged romantic relationships online. Bond Touch is a hot topic of discussion in certain corners of YouTube and Reddit, where users provide excessively detailed reviews of their bracelet-wearing experience. These users seem less concerned with simulating touch or affection than with communicating when they don’t have access to their phone, namely during class or at part-time jobs. They often develop Morse-code-like systems to lend layers of meaning to their taps. “When I really want his attention, I just send a very long one, and then he’s, like, ‘What do you want?’ . . . Three taps means ‘I love you,’ ” one YouTuber, HeyItsTay, explains, in a video that’s garnered over 1.8 million views. Safety is also a chief concern: almost all of the vloggers explain that Bond Touch is an effective way of letting someone know that you’re O.K., even if you’re not responding to text messages or Instagram DMs.

    Something like a Bond Touch bracelet ostensibly solves a communication problem, but it also creates one—the problem of over-availability, in which no one can be unreachable and no sentiment goes unexpressed. (One can imagine the anxieties that might arise from a set of unanswered taps, and the bracelets have already inspired plenty of off-label uses. “Great way for cheating in class,” one user commented on HeyItsTay’s Bond Touch video.) Not all technology is corrosive, of course, but there is something disheartening about a relationship wherein digital bracelets are meant to replace the rhythms of conversation and the ebbs and flows of emotional connection. The problem has less to do with the bracelets themselves than with the trend that they advance. In lieu of facetime, we seem willing to accept even the most basic forms of emotional stimulus, no matter how paltry a substitute they present.

    Reading about Bond Touch, I was reminded of an episode of the 2019 breakout comedy “PEN15.” The show is set in the era of the dial-up connection, and at one point its main characters, the awkward middle schoolers Anna and Maya, experiment with AOL Instant Messenger. Maya meets a guy named “Flymiamibro22” in a chat room, and their conversation quickly sparks an infatuation—and, eventually, something resembling love. “I love you more than I love my own DAD!” Maya tells Flymiamibro22 in a violent flurry of messages. Flymiamibro22 is a self-described “gym rat,” but in reality he’s one of Maya’s classmates and friends, Sam, posing online as an older guy. At the peak of her obsession, Maya begs her crush to meet her in person, and they arrange a date at a local bowling alley. Flymiamibro22 never materializes, but Sam reveals his true identity soon after, at a school dance. This admission produces a rush of fury and humiliation. But it also, finally, leads to catharsis, the growth and wisdom that flows from a confrontation with reality. That sort of confrontation seems increasingly avoidable today.

    Carrie Battan began contributing to The New Yorker in 2015 and became a staff writer in 2018.

    #Pratiques_numériques #Sites_rencontre #Dating #Bracelet #Culture_numérique

  • When the Beatles Walked Offstage: Fifty Years of “Abbey Road” | The New Yorker
    https://www.newyorker.com/culture/culture-desk/when-the-beatles-walked-offstage-fifty-years-of-abbey-road

    An excellent article on the greatest album in pop music.

    In the spring of 1969, Paul McCartney telephoned George Martin to ask if he would be willing to work with the Beatles on a new album they planned to record in the months ahead. Martin, who was widely regarded as the most accomplished pop-record producer in the world, had overseen the making of all nine albums and nineteen singles that the Beatles had released in Britain since their début on E.M.I.’s Parlophone label, in 1962. His reputation was synonymous with that of the group, and the fact that McCartney felt a need to ask him about his availability dramatized how much the Beatles’ professional circumstances had changed since the release of the two-record set known as the White Album, in the fall of 1968. In Martin’s view, the five months of tension and drama it took to make that album, followed by the fiasco of “Get Back,” an ill-fated film, concert, and recording project that ended inconclusively in January, 1969, had turned his recent work with the Beatles into a “miserable experience.”

    “After [‘Get Back’] I thought it was the end of the road for all of us,” he said later. “I didn’t really want to work with them anymore because they were becoming unpleasant people, to themselves as well as to other people. So I was quite surprised when Paul rang me up and asked me to produce another record for them. He said, ‘Will you really produce it?’ And I said, ‘If I’m really allowed to produce it. If I have to go back and accept a lot of instructions that I don’t like, then I won’t do it.’ ” After receiving McCartney’s assurance that he would indeed have a free hand, Martin booked a solid block of time at Abbey Road studios from the first of July to the end of August.

    To speak of “sides” is to acknowledge that “Abbey Road,” like most Beatles albums, was originally released as a double-sided vinyl LP. This was the format with which the group had revolutionized the recording industry in the sixties, when its popularity, self-sufficiency, and burgeoning artistic ambition helped to establish the self-written album as the principal medium of rock. Earlier, in the fifties, when “long-playing” records first became available, their selling point was their capacity. Unlike the 78-r.p.m. records they replaced, LPs could hold more than twenty minutes of music per side, which made them an ideal format for the extended performances of classical music, Broadway shows, film soundtracks, modern jazz, and standup comedy that accounted for the lion’s share of the record market at the time. Best-selling pop singers like Frank Sinatra, Harry Belafonte, and Elvis Presley also capitalized on the potential of the LP, not least because a prime virtue of albums in the pop market was their packaging. The records were sold in foot-square cardboard sleeves, faced with a photograph or illustration that served as an advertisement for the product within. By providing a portrait of the artist and a platform for the sort of promotional copy that had previously been confined to fan magazines, album “jackets” served as a tangible accessory to the experience of record listening. LP covers became an established form of graphic art, and the high standard of the graphic design on the Beatles’ early albums was one of the ways that Brian Epstein and George Martin sought to distinguish the group from the patronizing stereotypes that applied to teen-age pop.

    All of this, it goes without saying, is ancient history in an era of digital streaming and shuffling, which threatens the very concept of a record album as a cohesive work of art. In this sense, the fiftieth anniversary reissue of “Abbey Road” is an anachronism, a throwback to a time when an LP cover could serve as a cultural icon and the order of the songs on the two sides of an album became etched on its listeners’ minds. In the iconography of Beatles album covers, “Abbey Road” ranks with the conclave of culture heroes on the front of “Sgt. Pepper” and the mysterious side-lit portrait on the group’s first Capitol LP. Yet, like so much else on the album, its cover was a product of compromise. After entertaining the notion of naming the album “Everest” and travelling to Nepal to have themselves photographed in front of the world’s tallest peak, the Beatles elected to simply walk out the door of the studio on an August afternoon. The famous tableau of the four of them striding purposefully across the now-landmarked “zebra crossing”—Lennon in white, Starr in black, McCartney in gray, and Harrison in hippie denim from head to toe—advertised the differences in a band that had first captured the attention of the world in matching suits and haircuts. But its iconic status owed to the way it came to serve, in retrospect, as a typically droll image of the Beatles, walking off the stage of their career as a group.

    To return to Ned Rorem’s formulation: How good were the Beatles, notwithstanding the fact that everyone knew they were good? Good enough to produce this self-allusive masterpiece with their dying breath as a band. Good enough to enlist the smoke and mirrors of a modern recording studio to simulate the merger of musical sensibilities that they had once achieved by means of an unprecedented concentration and collaboration of sovereign talent. In this sense, “Abbey Road” memorializes a paradox of the group. The singing, songwriting, and playing on the album affirm the extent to which all four of the Beatles became consummate musical professionals in the course of their eight-year career. But the ending of that career affirms the extent to which these four “mates” from Liverpool, whose lives were transformed by such a surfeit of wealth and fame, never gave a thought to professionalizing their personal relationships with one another.

    Their contemporaries, such as the Rolling Stones and the Who, would carry on for decades as lucrative rock franchises, long after the bonds of adolescent friendship that originally joined them together had withered away. But, for the Beatles, whose adolescent friendship institutionalized the archetype of the rock group, a ubiquitous mode of musical organization that has endured to the present day, the deterioration in their personal relations completely outweighed the financial incentives that came with their status as the most successful musical artists of their time. From the beginning, they were understood to be a “band” in both senses of the word: as musicians, of course, but also, on a more elemental level, as a group of young men who shared a sense of identity, solidarity, and purpose. “I’ve compared it to a marriage,” Lennon would say. “Up until then, we really believed intensely in what we were doing, and the product we put out, and everything had to be just right. Suddenly we didn’t believe. And that was the end of it.”

    #Musique #The_Beatles #Abbey_Road #Vinyls

  • Ric Ocasek’s Eternal Cool | The New Yorker
    https://www.newyorker.com/culture/postscript/ric-ocaseks-eternal-cool

    Ocasek sang most of their other hits. The Cars combined the pleasures of New Wave synth modernity with the pleasures of bar-band guitar rock, in a style made especially distinctive by Ocasek’s borderline eerie vocals and aesthetic: starkly bold attire, black shades, black hairdo with a hint of fright wig. As a singer and a presence, Ocasek both channelled powerful emotion and seemed to float above it, as mysteriously as the ever-present sunglasses that obscured the look in his eyes. The Cars released their self-titled début in 1978; it was an instant classic. (I’m not sure I’ve ever listened to FM radio in my home town without hearing one of its songs in a rock block.) The album’s first track, “Good Times Roll,” is a strangely dispassionate call to revelry: mid-tempo, instructing, cool, hovering aloof above the notion of good times. It begins with spare, locomotive guitar. Ocasek commands us to let the good times roll, knock us around, make us a clown, leave us up in the air—but it doesn’t sound as if he’s going to do these things. Whereas the beloved 1956 Shirley and Lee song “Let the Good Times Roll” feels like a party—an instant get-on-the-dance-floor—the Cars are doing something stranger. Rock and roll is all about good times, but the Cars aren’t going to just lob them at us: instead, Ocasek invokes them for us to engage in, then leans back to watch what we do, like some kind of good-times fetishist.

    His vocals on the album’s other singles retain that weird cool, but they add emotions we can detect, even feel. “My Best Friend’s Girl” begins with penetrating guitar, hand claps, and vocals, but then plunges into friendly pop and gang’s-all-here backup singing. When Ocasek sings “She’s dancing ’neath the starry sky” and adds, “She’s my best friend’s girl / and she used to be mine,” it hurts, sweetly, and we begin to understand him as a human.

    Since I learned of Ocasek’s death, I’ve been pondering the nature of the Cars’ particular sound, and how, early on, they differed from their fellow New Wave artists and synth enthusiasts. For one thing, they employed the sounds of modernity and machinery without being woo-woo about it; they weren’t art rock à la Bowie and Brian Eno, or Kraftwerk, or Joy Division. Today, I saw that, in 1978, Ocasek, when asked by the Globe about rumors that the Cars had sought production by Eno, said, “No, we have enough oblique strategy already. If we had any more, we’d be on a space capsule headed for Mars.” They didn’t want Mars—they wanted to go their own way, unique and on the ground.

    #Musique #Ric_Ocasek #The_Cars

  • A text by the writer #Jonathan_Franzen that is causing quite a stir... as if collapsology took longer to reach the mainstream media in the United States:

    What If We Stopped Pretending?
    Jonathan Franzen, The New Yorker, September 8, 2019
    https://www.newyorker.com/culture/cultural-comment/what-if-we-stopped-pretending

    Added to the third compilation:
    https://seenthis.net/messages/680147

    #effondrement #collapsologie #catastrophe #fin_du_monde #it_has_begun #Anthropocène #capitalocène #USA

    But also to the evaluations and critiques of the #actions_individuelles compiled here:
    https://seenthis.net/messages/794181

    Semi #paywall, so:

    “There is infinite hope,” Kafka tells us, “only not for us.” This is a fittingly mystical epigram from a writer whose characters strive for ostensibly reachable goals and, tragically or amusingly, never manage to get any closer to them. But it seems to me, in our rapidly darkening world, that the converse of Kafka’s quip is equally true: There is no hope, except for us.

    I’m talking, of course, about climate change. The struggle to rein in global carbon emissions and keep the planet from melting down has the feel of Kafka’s fiction. The goal has been clear for thirty years, and despite earnest efforts we’ve made essentially no progress toward reaching it. Today, the scientific evidence verges on irrefutable. If you’re younger than sixty, you have a good chance of witnessing the radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought. If you’re under thirty, you’re all but guaranteed to witness it.

    If you care about the planet, and about the people and animals who live on it, there are two ways to think about this. You can keep on hoping that catastrophe is preventable, and feel ever more frustrated or enraged by the world’s inaction. Or you can accept that disaster is coming, and begin to rethink what it means to have hope.

    Even at this late date, expressions of unrealistic hope continue to abound. Hardly a day seems to pass without my reading that it’s time to “roll up our sleeves” and “save the planet”; that the problem of climate change can be “solved” if we summon the collective will. Although this message was probably still true in 1988, when the science became fully clear, we’ve emitted as much atmospheric carbon in the past thirty years as we did in the previous two centuries of industrialization. The facts have changed, but somehow the message stays the same.

    Psychologically, this denial makes sense. Despite the outrageous fact that I’ll soon be dead forever, I live in the present, not the future. Given a choice between an alarming abstraction (death) and the reassuring evidence of my senses (breakfast!), my mind prefers to focus on the latter. The planet, too, is still marvelously intact, still basically normal—seasons changing, another election year coming, new comedies on Netflix—and its impending collapse is even harder to wrap my mind around than death. Other kinds of apocalypse, whether religious or thermonuclear or asteroidal, at least have the binary neatness of dying: one moment the world is there, the next moment it’s gone forever. Climate apocalypse, by contrast, is messy. It will take the form of increasingly severe crises compounding chaotically until civilization begins to fray. Things will get very bad, but maybe not too soon, and maybe not for everyone. Maybe not for me.

    Some of the denial, however, is more willful. The evil of the Republican Party’s position on climate science is well known, but denial is entrenched in progressive politics, too, or at least in its rhetoric. The Green New Deal, the blueprint for some of the most substantial proposals put forth on the issue, is still framed as our last chance to avert catastrophe and save the planet, by way of gargantuan renewable-energy projects. Many of the groups that support those proposals deploy the language of “stopping” climate change, or imply that there’s still time to prevent it. Unlike the political right, the left prides itself on listening to climate scientists, who do indeed allow that catastrophe is theoretically avertable. But not everyone seems to be listening carefully. The stress falls on the word theoretically.

    Our atmosphere and oceans can absorb only so much heat before climate change, intensified by various feedback loops, spins completely out of control. The consensus among scientists and policy-makers is that we’ll pass this point of no return if the global mean temperature rises by more than two degrees Celsius (maybe a little more, but also maybe a little less). The I.P.C.C.—the Intergovernmental Panel on Climate Change—tells us that, to limit the rise to less than two degrees, we not only need to reverse the trend of the past three decades. We need to approach zero net emissions, globally, in the next three decades.

    This is, to say the least, a tall order. It also assumes that you trust the I.P.C.C.’s calculations. New research, described last month in Scientific American, demonstrates that climate scientists, far from exaggerating the threat of climate change, have underestimated its pace and severity. To project the rise in the global mean temperature, scientists rely on complicated atmospheric modelling. They take a host of variables and run them through supercomputers to generate, say, ten thousand different simulations for the coming century, in order to make a “best” prediction of the rise in temperature. When a scientist predicts a rise of two degrees Celsius, she’s merely naming a number about which she’s very confident: the rise will be at least two degrees. The rise might, in fact, be far higher.

    As a non-scientist, I do my own kind of modelling. I run various future scenarios through my brain, apply the constraints of human psychology and political reality, take note of the relentless rise in global energy consumption (thus far, the carbon savings provided by renewable energy have been more than offset by consumer demand), and count the scenarios in which collective action averts catastrophe. The scenarios, which I draw from the prescriptions of policy-makers and activists, share certain necessary conditions.

    The first condition is that every one of the world’s major polluting countries institute draconian conservation measures, shut down much of its energy and transportation infrastructure, and completely retool its economy. According to a recent paper in Nature, the carbon emissions from existing global infrastructure, if operated through its normal lifetime, will exceed our entire emissions “allowance”—the further gigatons of carbon that can be released without crossing the threshold of catastrophe. (This estimate does not include the thousands of new energy and transportation projects already planned or under construction.) To stay within that allowance, a top-down intervention needs to happen not only in every country but throughout every country. Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks.

    The actions taken by these countries must also be the right ones. Vast sums of government money must be spent without wasting it and without lining the wrong pockets. Here it’s useful to recall the Kafkaesque joke of the European Union’s biofuel mandate, which served to accelerate the deforestation of Indonesia for palm-oil plantations, and the American subsidy of ethanol fuel, which turned out to benefit no one but corn farmers.

    Finally, overwhelming numbers of human beings, including millions of government-hating Americans, need to accept high taxes and severe curtailment of their familiar life styles without revolting. They must accept the reality of climate change and have faith in the extreme measures taken to combat it. They can’t dismiss news they dislike as fake. They have to set aside nationalism and class and racial resentments. They have to make sacrifices for distant threatened nations and distant future generations. They have to be permanently terrified by hotter summers and more frequent natural disasters, rather than just getting used to them. Every day, instead of thinking about breakfast, they have to think about death.

    Call me a pessimist or call me a humanist, but I don’t see human nature fundamentally changing anytime soon. I can run ten thousand scenarios through my model, and in not one of them do I see the two-degree target being met.

    To judge from recent opinion polls, which show that a majority of Americans (many of them Republican) are pessimistic about the planet’s future, and from the success of a book like David Wallace-Wells’s harrowing “The Uninhabitable Earth,” which was released this year, I’m not alone in having reached this conclusion. But there continues to be a reluctance to broadcast it. Some climate activists argue that if we publicly admit that the problem can’t be solved, it will discourage people from taking any ameliorative action at all. This seems to me not only a patronizing calculation but an ineffectual one, given how little progress we have to show for it to date. The activists who make it remind me of the religious leaders who fear that, without the promise of eternal salvation, people won’t bother to behave well. In my experience, nonbelievers are no less loving of their neighbors than believers. And so I wonder what might happen if, instead of denying reality, we told ourselves the truth.

    First of all, even if we can no longer hope to be saved from two degrees of warming, there’s still a strong practical and ethical case for reducing carbon emissions. In the long run, it probably makes no difference how badly we overshoot two degrees; once the point of no return is passed, the world will become self-transforming. In the shorter term, however, half measures are better than no measures. Halfway cutting our emissions would make the immediate effects of warming somewhat less severe, and it would somewhat postpone the point of no return. The most terrifying thing about climate change is the speed at which it’s advancing, the almost monthly shattering of temperature records. If collective action resulted in just one fewer devastating hurricane, just a few extra years of relative stability, it would be a goal worth pursuing.

    In fact, it would be worth pursuing even if it had no effect at all. To fail to conserve a finite resource when conservation measures are available, to needlessly add carbon to the atmosphere when we know very well what carbon is doing to it, is simply wrong. Although the actions of one individual have zero effect on the climate, this doesn’t mean that they’re meaningless. Each of us has an ethical choice to make. During the Protestant Reformation, when “end times” was merely an idea, not the horribly concrete thing it is today, a key doctrinal question was whether you should perform good works because it will get you into Heaven, or whether you should perform them simply because they’re good—because, while Heaven is a question mark, you know that this world would be better if everyone performed them. I can respect the planet, and care about the people with whom I share it, without believing that it will save me.

    More than that, a false hope of salvation can be actively harmful. If you persist in believing that catastrophe can be averted, you commit yourself to tackling a problem so immense that it needs to be everyone’s overriding priority forever. One result, weirdly, is a kind of complacency: by voting for green candidates, riding a bicycle to work, avoiding air travel, you might feel that you’ve done everything you can for the only thing worth doing. Whereas, if you accept the reality that the planet will soon overheat to the point of threatening civilization, there’s a whole lot more you should be doing.

    Our resources aren’t infinite. Even if we invest much of them in a longest-shot gamble, reducing carbon emissions in the hope that it will save us, it’s unwise to invest all of them. Every billion dollars spent on high-speed trains, which may or may not be suitable for North America, is a billion not banked for disaster preparedness, reparations to inundated countries, or future humanitarian relief. Every renewable-energy mega-project that destroys a living ecosystem—the “green” energy development now occurring in Kenya’s national parks, the giant hydroelectric projects in Brazil, the construction of solar farms in open spaces, rather than in settled areas—erodes the resilience of a natural world already fighting for its life. Soil and water depletion, overuse of pesticides, the devastation of world fisheries—collective will is needed for these problems, too, and, unlike the problem of carbon, they’re within our power to solve. As a bonus, many low-tech conservation actions (restoring forests, preserving grasslands, eating less meat) can reduce our carbon footprint as effectively as massive industrial changes.

    All-out war on climate change made sense only as long as it was winnable. Once you accept that we’ve lost it, other kinds of action take on greater meaning. Preparing for fires and floods and refugees is a directly pertinent example. But the impending catastrophe heightens the urgency of almost any world-improving action. In times of increasing chaos, people seek protection in tribalism and armed force, rather than in the rule of law, and our best defense against this kind of dystopia is to maintain functioning democracies, functioning legal systems, functioning communities. In this respect, any movement toward a more just and civil society can now be considered a meaningful climate action. Securing fair elections is a climate action. Combatting extreme wealth inequality is a climate action. Shutting down the hate machines on social media is a climate action. Instituting humane immigration policy, advocating for racial and gender equality, promoting respect for laws and their enforcement, supporting a free and independent press, ridding the country of assault weapons—these are all meaningful climate actions. To survive rising temperatures, every system, whether of the natural world or of the human world, will need to be as strong and healthy as we can make it.

    And then there’s the matter of hope. If your hope for the future depends on a wildly optimistic scenario, what will you do ten years from now, when the scenario becomes unworkable even in theory? Give up on the planet entirely? To borrow from the advice of financial planners, I might suggest a more balanced portfolio of hopes, some of them longer-term, most of them shorter. It’s fine to struggle against the constraints of human nature, hoping to mitigate the worst of what’s to come, but it’s just as important to fight smaller, more local battles that you have some realistic hope of winning. Keep doing the right thing for the planet, yes, but also keep trying to save what you love specifically—a community, an institution, a wild place, a species that’s in trouble—and take heart in your small successes. Any good thing you do now is arguably a hedge against the hotter future, but the really meaningful thing is that it’s good today. As long as you have something to love, you have something to hope for.

    In Santa Cruz, where I live, there’s an organization called the Homeless Garden Project. On a small working farm at the west end of town, it offers employment, training, support, and a sense of community to members of the city’s homeless population. It can’t “solve” the problem of homelessness, but it’s been changing lives, one at a time, for nearly thirty years. Supporting itself in part by selling organic produce, it contributes more broadly to a revolution in how we think about people in need, the land we depend on, and the natural world around us. In the summer, as a member of its C.S.A. program, I enjoy its kale and strawberries, and in the fall, because the soil is alive and uncontaminated, small migratory birds find sustenance in its furrows.

    There may come a time, sooner than any of us likes to think, when the systems of industrial agriculture and global trade break down and homeless people outnumber people with homes. At that point, traditional local farming and strong communities will no longer just be liberal buzzwords. Kindness to neighbors and respect for the land—nurturing healthy soil, wisely managing water, caring for pollinators—will be essential in a crisis and in whatever society survives it. A project like the Homeless Garden offers me the hope that the future, while undoubtedly worse than the present, might also, in some ways, be better. Most of all, though, it gives me hope for today.

  • Can Reading Make You Happier? | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/can-reading-make-you-happier

    In a secular age, I suspect that reading fiction is one of the few remaining paths to transcendence, that elusive state in which the distance between the self and the universe shrinks. Reading fiction makes me lose all sense of self, but at the same time makes me feel most uniquely myself. As Woolf, the most fervent of readers, wrote, a book “splits us into two parts as we read,” for “the state of reading consists in the complete elimination of the ego,” while promising “perpetual union” with another mind.

    Bibliotherapy is a very broad term for the ancient practice of encouraging reading for therapeutic effect. The first use of the term is usually dated to a jaunty 1916 article in The Atlantic Monthly, “A Literary Clinic.” In it, the author describes stumbling upon a “bibliopathic institute” run by an acquaintance, Bagster, in the basement of his church, from where he dispenses reading recommendations with healing value. “Bibliotherapy is…a new science,” Bagster explains. “A book may be a stimulant or a sedative or an irritant or a soporific. The point is that it must do something to you, and you ought to know what it is. A book may be of the nature of a soothing syrup or it may be of the nature of a mustard plaster.” To a middle-aged client with “opinions partially ossified,” Bagster gives the following prescription: “You must read more novels. Not pleasant stories that make you forget yourself. They must be searching, drastic, stinging, relentless novels.” (George Bernard Shaw is at the top of the list.) Bagster is finally called away to deal with a patient who has “taken an overdose of war literature,” leaving the author to think about the books that “put new life into us and then set the life pulse strong but slow.”

    Today, bibliotherapy takes many different forms, from literature courses run for prison inmates to reading circles for elderly people suffering from dementia. Sometimes it can simply mean one-on-one or group sessions for “lapsed” readers who want to find their way back to an enjoyment of books.

    Berthoud and Elderkin trace the method of bibliotherapy all the way back to the Ancient Greeks, “who inscribed above the entrance to a library in Thebes that this was a ‘healing place for the soul.’ ” The practice came into its own at the end of the nineteenth century, when Sigmund Freud began using literature during psychoanalysis sessions. After the First World War, traumatized soldiers returning home from the front were often prescribed a course of reading. “Librarians in the States were given training on how to give books to WWI vets, and there’s a nice story about Jane Austen’s novels being used for bibliotherapeutic purposes at the same time in the U.K.,” Elderkin says. Later in the century, bibliotherapy was used in varying ways in hospitals and libraries, and has more recently been taken up by psychologists, social and aged-care workers, and doctors as a viable mode of therapy.

    For all avid readers who have been self-medicating with great books their entire lives, it comes as no surprise that reading books can be good for your mental health and your relationships with others, but exactly why and how is now becoming clearer, thanks to new research on reading’s effects on the brain. Since the discovery, in the mid-nineties, of “mirror neurons”—neurons that fire in our brains both when we perform an action ourselves and when we see an action performed by someone else—the neuroscience of empathy has become clearer. A 2011 study published in the Annual Review of Psychology, based on analysis of fMRI brain scans of participants, showed that, when people read about an experience, they display stimulation within the same neurological regions as when they go through that experience themselves. We draw on the same brain networks when we’re reading stories and when we’re trying to guess at another person’s feelings.

    Other studies published in 2006 and 2009 showed something similar—that people who read a lot of fiction tend to be better at empathizing with others (even after the researchers had accounted for the potential bias that people with greater empathetic tendencies may prefer to read novels). And, in 2013, an influential study published in Science found that reading literary fiction (rather than popular fiction or literary nonfiction) improved participants’ results on tests that measured social perception and empathy, which are crucial to “theory of mind”: the ability to guess with accuracy what another human being might be thinking or feeling, a skill humans only start to develop around the age of four.

    But not everybody agrees with this characterization of fiction reading as having the ability to make us behave better in real life. In her 2007 book, “Empathy and the Novel,” Suzanne Keen takes issue with this “empathy-altruism hypothesis,” and is skeptical about whether empathetic connections made while reading fiction really translate into altruistic, prosocial behavior in the world. She also points out how hard it is to really prove such a hypothesis. “Books can’t make change by themselves—and not everyone feels certain that they ought to,” Keen writes. “As any bookworm knows, readers can also seem antisocial and indolent. Novel reading is not a team sport.” Instead, she urges, we should enjoy what fiction does give us, which is a release from the moral obligation to feel something for invented characters—as you would for a real, live human being in pain or suffering—which paradoxically means readers sometimes “respond with greater empathy to an unreal situation and characters because of the protective fictionality.” And she wholeheartedly supports the personal health benefits of an immersive experience like reading, which “allows a refreshing escape from ordinary, everyday pressures.”

    #Bibliothérapie #Lecture #Romans #Psychologie #Empathie