Culture : TV, Movies, Music, Art, and Theatre News and Reviews

/culture

  • The Second Act of Social-Media Activism | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-second-act-of-social-media-activism

    A fascinating article that starts from Zeynep Tufekci's analyses and reconsiders them in light of more recent movements.

    Some of this story may seem familiar. In “Twitter and Tear Gas: The Power and Fragility of Networked Protest,” from 2017, the sociologist Zeynep Tufekci examined how a “digitally networked public sphere” had come to shape social movements. Tufekci drew on her own experience of the 2011 Arab uprisings, whose early mobilization of social media set the stage for the protests at Gezi Park, in Istanbul, the Occupy action, in New York City, and the Black Lives Matter movement, in Ferguson. For Tufekci, the use of the Internet linked these various, decentralized uprisings and distinguished them from predecessors such as the nineteen-sixties civil-rights movement. Whereas “older movements had to build their organizing capacity first,” Tufekci argued, “modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first protest or march.”

    The speed afforded by such protest is, however, as much its peril as its promise. After a swift expansion, spontaneous movements are often prone to what Tufekci calls “tactical freezes.” Because they are often leaderless, and can lack “both the culture and the infrastructure for making collective decisions,” they are left with little room to adjust strategies or negotiate demands. At a more fundamental level, social media’s corporate infrastructure makes such movements vulnerable to coöptation and censorship. Tufekci is clear-eyed about these pitfalls, even as she rejects the broader criticisms of “slacktivism” laid out, for example, by Evgeny Morozov’s “The Net Delusion,” from 2011.

    “Twitter and Tear Gas” remains trenchant about how social media can and cannot enact reform. But movements change, as does technology. Since Tufekci’s book was published, social media has helped represent—and, in some cases, helped organize—the Arab Spring 2.0, France’s “Yellow Vest” movement, Puerto Rico’s RickyLeaks, the 2019 Iranian protests, the Hong Kong protests, and what we might call the B.L.M. uprising of 2020. This last event, still ongoing, has evinced a scale, creativity, and endurance that challenges those skeptical of the Internet’s ability to mediate a movement. As Tufekci notes in her book, the real-world effects of Occupy, the Women’s March, and even Ferguson-era B.L.M. were often underwhelming. By contrast, since George Floyd’s death, cities have cut billions of dollars from police budgets; school districts have severed ties with police; multiple police-reform-and-accountability bills have been introduced in Congress; and cities like Minneapolis have vowed to defund policing. Plenty of work remains, but the link between activism, the Internet, and material action seems to have deepened. What’s changed?

    The current uprisings slot neatly into Tufekci’s story, with one exception. As the flurry of digital activism continues, there is no sense that this movement is unclear about its aims—abolition—or that it might collapse under a tactical freeze. Instead, the many protest guides, syllabi, Webinars, and the like have made clear both the objectives of abolition and the digital savvy of abolitionists. It is a message so legible that even Fox News grasped it with relative ease. Rachel Kuo, an organizer and scholar of digital activism, told me that this clarity has been shaped partly by organizers who increasingly rely on “a combination of digital platforms, whether that’s Google Drive, Signal, Messenger, Slack, or other combinations of software, for collaboration, information storage, resource access, and daily communications.” The public tends to focus, understandably, on the profusion of hashtags and sleek graphics, but Kuo stressed that it was this “back end” work—an inventory of knowledge, a stronger sense of alliance—that has allowed digital activism to “reflect broader concerns and visions around community safety, accessibility, and accountability.” The uprisings might have unfolded organically, but what has sustained them is precisely what many prior networked protests lacked: preëxisting organizations with specific demands for a better world.

    What’s distinct about the current movement is not just the clarity of its messaging, but its ability to convey that message through so much noise. On June 2nd, the music industry launched #BlackoutTuesday, an action against police brutality that involved, among other things, Instagram and Facebook users posting plain black boxes to their accounts. The posts often included the hashtag #BlackLivesMatter; almost immediately, social-media users were inundated with even more posts, which explained why using that hashtag drowned out crucial information about events and resources with a sea of mute boxes. For Meredith Clark, a media-studies professor at the University of Virginia, the response illustrated how the B.L.M. movement had honed its ability to stick to a program, and to correct those who deployed that program naïvely. In 2014, many people had only a thin sense of how a hashtag could organize actions or establish circles of care. Today, “people understand what it means to use a hashtag,” Clark told me. They use “their own social media in a certain way to essentially quiet background noise” and “allow those voices that need to connect with each other the space to do so.” The #BlackoutTuesday affair exemplified an increasing awareness of how digital tactics have material consequences.

    These networks suggest that digital activism has entered a second act, in which the tools of the Internet have been increasingly integrated into the hard-won structure of older movements. Still, as networked protest grows in scale and popularity, it risks being hijacked by the mainstream. Any urgent circulation of information—the same memes filtering through your Instagram stories, the same looping images retweeted into your timeline—can be numbing, and any shift in the Overton window means that hegemony drifts with it.

    In “Twitter and Tear Gas,” Tufekci wrote, “The Black Lives Matter movement is young, and how it will develop further capacities remains to be seen.” The movement is older now. It has developed its tactics, its messaging, its reach—but perhaps its most striking new capacity is a sharper recognition of social media’s limits. “This movement has mastered what social media is good for,” Deva Woodly, a professor of politics at the New School, told me. “And that’s basically the meme: it’s the headline.” Those memes, Woodly said, help “codify the message” that leads to broader, deeper conversations offline, which, in turn, build on a long history of radical pedagogy. As more and more of us join those conversations, prompted by the words and images we see on our screens, it’s clear that the revolution will not be tweeted—at least, not entirely.

    #Activisme_connecté #Black_lives_matter #Zeynep_Tufekci #Mèmes #Hashtag_movments #Médias_sociaux

  • The Walkman, Forty Years On | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-walkman-forty-years-on

    Even prior to extended quarantines, lockdowns, and self-isolation, it was hard to imagine life without the electronic escapes of noise-cancelling earbuds, smartphones, and tablets. Today, it seems impossible. Of course, there was most certainly a before and after, a point around which the cultural gravity of our plugged-in-yet-tuned-out modern lives shifted. Its name is Walkman, and it was invented, in Japan, in 1979. After the Walkman arrived on American shores, in June of 1980, under the temporary name of Soundabout, our days would never be the same.

    Up to this point, music was primarily a shared experience: families huddling around furniture-sized Philcos; teens blasting tunes from automobiles or sock-hopping to transistor radios; the bar-room juke; break-dancers popping and locking to the sonic backdrop of a boom box. After the Walkman, music could be silence to all but the listener, cocooned within a personal soundscape, which spooled on analog cassette tape. The effect was shocking even to its creators. “Everyone knows what headphones sound like today,” the late Sony designer Yasuo Kuroki wrote in a Japanese-language memoir, from 1990. “But at the time, you couldn’t even imagine it, and then suddenly Beethoven’s Fifth is hammering between your ears.”

    Sony’s chairman at the time, the genial Akio Morita, was so unsure of the device’s prospects that he ordered a manufacturing run of only thirty thousand, a drop in the bucket compared to such established lines as Trinitron televisions. Initially, he seemed right to be cautious. The Walkman débuted in Japan to near silence. But word quickly spread among the youth of Tokyo about a strange new device that let you carry a soundtrack out of your bedroom, onto commuter trains, and into city streets. Within a year and a half of the appearance of the Walkman, Sony would produce and sell two million of them.

    For the Walkman’s growing numbers of users, isolation was the whole point. “With the advent of the Sony Walkman came the end of meeting people,” Susan Blond, a vice-president at CBS Records, told the Washington Post in 1981. “It’s like a drug: You put the Walkman on and you blot out the rest of the world.” It didn’t take long for academics to coin a term for the phenomenon. The musicologist Shuhei Hosokawa called it “the Walkman effect.”

    There had been popular electronic gadgets before, such as the pocket-sized transistor radios of the fifties, sixties, and seventies. But the Walkman was in another league. Until this point, earphones had been associated with hearing impairment, geeky technicians manning sonar stations, or basement-dwelling hi-fi fanatics. Somehow, a Japanese company had made the high-tech headgear cool.

    “Steve’s point of reference was Sony at the time,” his successor at Apple, John Sculley, recalled. “He really wanted to be Sony. He didn’t want to be IBM. He didn’t want to be Microsoft. He wanted to be Sony.”

    Jobs would get his wish with the début of the iPod, in 2001. It wasn’t the first digital-music player—a South Korean firm had introduced one back in 1998. (That Sony failed to exploit the niche, in spite of having created listening-on-the-go and even owning its own record label, was a testament to how Morita’s unexpected retirement after a stroke, in 1993, hobbled the corporation.) But Apple’s was the most stylish to date, bereft of the complicated and button-festooned interfaces of its competitors, finished in sleek pearlescent plastic and with a satisfying heft that hinted at powerful technologies churning inside. Apple also introduced a tantalizing new method of serving up music: the shuffle, which let listeners remix entire musical libraries into never-ending audio backdrops for their lives. Once again, city streets were the proving ground for this evolution of portable listening technology. “I was on Madison [Ave],” Jobs told Newsweek, in 2004, “and it was, like, on every block, there was someone with white headphones, and I thought, ‘Oh, my God, it’s starting to happen.’ ”

    #Walkman #Sony #Steve_Jobs #Musique #Isolement

  • Bob Dylan’s “Rough and Rowdy Ways” Hits Hard | The New Yorker
    https://www.newyorker.com/culture/culture-desk/bob-dylans-rough-and-rowdy-ways-hits-hard

    A few weeks into quarantine, time became liquid. All the usual markers and routines—waking up and lurching down the block to buy a cup of coffee, dressing carefully for a work meeting, corralling friends for karaoke on a Sunday afternoon—were nullified, and the days assumed a soft, amorphous quality. Then, at midnight on a Friday, Bob Dylan suddenly released “Murder Most Foul,” an elegiac, thickset, nearly seventeen-minute song ostensibly about the assassination of J.F.K., but so laden with cultural allusions that it somehow felt even bigger than that. It was the first piece of original music Dylan had released since his album “Tempest,” in 2012, and, on first listen, I found the song surreal. It went on forever; it was over before I knew it. The instrumentation (piano, bowed bass, faint percussion) is hazy and diffuse. Dylan’s vocal phrasing, always careful, felt particularly mesmeric. Rub-a-dub-dub, Altamont, Deep Ellum, Patsy Cline, Air Force One, Thelonious Monk, Bugsy Siegel, Pretty Boy Floyd. What day was it? What year?

    Two months later, “Murder Most Foul” hits different: “We’re gonna kill you with hatred / Without any respect / We’ll mock you and shock you / And we’ll put it in your face,” Dylan sings in the song’s first verse. His voice is withering. “It’s a Murder. Most. Foul.” Dylan has spent decades seeing and chronicling American injustice. Forty-four years ago, on “Hurricane,” he sang frankly about police brutality: “If you’re black, you might as well not show up on the street / ’Less you want to draw the heat.”

    This week, Dylan will release “Rough and Rowdy Ways,” a gruesome, crowded, marauding album that feels unusually attuned to its moment. Unlike many artists who reacted to the pandemic with a kind of dutiful tenderness—“Let me help with my song!”—Dylan has decided not to offer comfort, nor to hint at some vague solidarity. Lyrically, he’s either cracking weird jokes (“I’ll take the ‘Scarface’ Pacino and the ‘Godfather’ Brando / Mix ’em up in a tank and get a robot commando”) or operating in a cold, disdainful, it-ain’t-me-babe mode. Dylan’s musicianship is often undersold by critics, but on “Rough and Rowdy Ways” it’s especially difficult to focus on anything other than his voice; at seventy-nine, he sounds warmed up and self-assured. There are moments when he appears to be chewing on his own mortality—he recently told the Times that he thinks about death “in general terms, not in a personal way”—but mostly he sounds elegant and steady, a vocal grace he might have acquired while recording all those standards. “Three miles north of Purgatory, one step from the great beyond,” he sings calmly on “Crossing the Rubicon.”

    It’s sometimes hard to think of Dylan doing normal, vulnerable things like falling in love, though he sings about heartache—his compulsion toward it, his indulgence of its wounds—constantly. My favorite track on “Rough and Rowdy Ways” is “I’ve Made Up My Mind to Give Myself to You,” a gentle ballad about deliberately resigning oneself to love and its demands. It’s not the album’s richest or most complicated song—“Key West (Philosopher Pirate)” is Shakespearean—but I’ve been listening to it constantly, mostly for its evocation of a certain kind of golden-hour melancholy. Imagine sitting on a porch or on the front steps of an apartment building, nursing a big drink in a stupid glass, and reluctantly accepting your fate: “Been thinking it all over / And I thought it all through / I’ve made up my mind / To give myself to you.” It’s not quite romantic, but, then again, neither is love. The song’s emotional climax comes less than halfway through, when Dylan announces, “From Salt Lake City to Birmingham / From East L.A. to San Antone / I don’t think I could bear to live my life alone!” Ever so briefly, his voice goes feral.

    Dylan is a voracious student of United States history—he can, and often does, itemize the various atrocities that have been committed in service to country—and “Rough and Rowdy Ways” could be understood as a glib summation of America’s outlaw origins, and of the confused, dangerous, and often haphazard way that we preserve democracy. He seems to understand instinctively that American history is not a series of fixed points but an unmoored and constantly evolving idea that needs to be reëstablished each day—things don’t happen once and then stop happening. In this sense, linear time becomes an invention; every moment is this moment. This is why, on “Murder Most Foul,” Buster Keaton and Dickey Betts and the Tulsa race massacre of 1921 and Lindsey Buckingham and Stevie Nicks and the Birdman of Alcatraz can coexist, harmoniously, in a single verse. That Dylan named another dense, allusive song on the album, “I Contain Multitudes,” after a much-quoted stanza from Walt Whitman’s “Song of Myself”—“Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)”—also seems to indicate some reckoning with the vastness and immediacy of American culture. (Dylan’s interests are so wonderfully obtuse and far-ranging that it’s sometimes hard to discern precisely what he’s referring to: Is the “Cry Me a River” that he mentions on “Murder Most Foul” a reference to the jazz standard made famous by the actress Julie London, in 1955, or to the dark, cluttered revenge jam that Justin Timberlake supposedly wrote about Britney Spears, in 2002? My money is on the latter.)

    Now thirty-nine albums in, it’s tempting to dismiss Dylan as sepia-toned—a professor emeritus, a museum piece, a Nobel laureate coasting through his sunset years, the mouthpiece of some bygone generation but certainly not this one. (It’s hard, admittedly, to imagine bars of “I Contain Multitudes” finding viral purchase on TikTok.) The sheer volume of writing about his life and music suggests a completed arc, which makes it easy to presume that there’s nothing useful, interesting, or pertinent left to say. Yet, for me, Dylan’s vast and intersectional understanding of the American mythos feels so plainly and uniquely relevant to the grimness and magnitude of these past few months. As the country attempts to metabolize the murder of George Floyd, it is also attempting to reckon with every crooked, brutal, odious, or unjust murder of a black person—to understand a cycle that began centuries ago and somehow continues apace. What is American racism? It’s everything, Dylan insists. Indiana Jones and J.F.K. and Elvis Presley and Jimmy Reed—nothing exists without the rest of it. None of us are absolved, and none of us are spared.

    Amanda Petrusich is a staff writer at The New Yorker and the author of “Do Not Sell at Any Price: The Wild, Obsessive Hunt for the World’s Rarest 78rpm Records.”

    #Bob_Dylan #Music

    • A funny interview that reads more like a chat between girlfriends: the young journalist and the old left-wing feminist (a Maoist, she says). The questions are almost more interesting than the answers. It touches on mayonnaise and on #Marie_Kondo, whose poor English #Barbara_Ehrenreich criticized.

      Well, I think what I said was really stupid—ill considered and written quickly and I was mortified. Some editor had asked me to write something about Marie Kondo, so I watched part of her show on Netflix, and I was appalled. I hope that’s not intrinsically bad. I’ll admit something to you—one thing that was also going on was that my mother would just throw all my clothes out of the chest of drawers and onto the floor when she thought things were messy. Something about that got triggered with Marie Kondo and I felt this sort of rage, not that that’s an excuse or anything.

      #Rebecca_Solnit too.

      Bizarre!

  • The Faces of a New Union Movement | The New Yorker
    https://www.newyorker.com/culture/photo-booth/the-faces-of-a-new-union-movement

    Haag is part of a wave of young workers who have been unionizing in sectors with little or no tradition of unions: art museums, including the Guggenheim and the New Museum, but also tech companies, digital-media brands, political campaigns, even cannabis shops. At Google, around ninety contract workers in Pittsburgh recently formed a union—a significant breakthrough, even if they represent just a tiny fraction of the company’s workforce. More than thirty digital publications, including Vox, Vice, Salon, Slate, and HuffPost, have unionized. (The editorial staff of The New Yorker unionized in 2018.) Last March, Bernie Sanders’s campaign became the first major-party Presidential campaign in history with a unionized workforce; the campaigns of Eric Swalwell, Julián Castro, and Elizabeth Warren unionized soon after. At Grinnell College, in Iowa, students working in the school’s dining hall unionized in 2016, becoming one of the nation’s only undergraduate-student labor unions. Sam Xu, the union’s twenty-one-year-old former president, said, “Mark Zuckerberg was running Facebook out of his dorm room. I’m running a union out of my dorm room.”

    The American labor movement has been reinvigorated in recent years, with the teacher-led Red for Ed strikes, the General Motors walkout, and the Fight for $15’s push to raise the minimum wage. A Gallup poll last summer found that sixty-four per cent of Americans approve of unions—one of the highest ratings recorded in the past fifty years. The highest rate of approval came from young people: sixty-seven per cent among eighteen-to-thirty-four-year-olds. Rebecca Givan, an associate professor of labor studies at Rutgers University, said that many young people are interested in joining unions because they’re “feeling the pinch”—many “have a tremendous amount of student debt, and, if they’re living in cities, they’re struggling to afford housing.” Givan added that many feel considerable insecurity about their jobs. “The industries that they’re organizing in are volatile,” she said. Jake Rosenfeld, an associate professor of sociology at Washington University, said, “Underemployed college-educated workers aren’t buying what was until recently the prevailing understanding of our economy: that hard work and a college degree was a ticket to a stable, well-paying job.”

    #Syndicats #Gig_economy

  • A Day of Reckoning for Michael Jackson with “Leaving Neverland” | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/a-day-of-reckoning-for-michael-jackson-with-leaving-neverland

    It is hideous, but true, that allegations of this sort have historically been treated differently when the accused is a virtuosic and deeply beloved male performer: Miles Davis allegedly beat his wives; Jimmy Page allegedly had a relationship with a fourteen-year-old girl; the late rapper XXXTentacion allegedly battered his ex-girlfriend when she was pregnant; Chuck Berry was convicted of transporting a minor across state lines for “immoral purposes”; and on, and on, and on, until the entire history of Western music collapses in a haze of abuse and transgression, unable to survive any sort of moral dragnet.

  • Bond Touch Bracelets and the New Frontiers of Digital Dating | The New Yorker
    https://www.newyorker.com/culture/culture-desk/bond-touch-bracelets-and-the-new-frontiers-of-digital-dating

    Few things feel as fraught, in the modern age, as the long-distance relationship. The hazards of digital romance have been well chronicled, perhaps most prominently in the documentary and subsequent TV series “Catfish,” which exposed viewers to a new and expansive genre of horror. To “catfish” someone, in common parlance, is to meet a person online through dating apps, social-media sites, or chat rooms, and to seduce them using fake photos and fictional biographical details. On the reality-TV version of “Catfish,” lovesick victims confront those who deceived them, in grim, emotional scenes of revelation and heartbreak. Throw teens into the mix, and the narrative can turn even more ghastly. One thinks of the tabloid story of Michelle Carter and her boyfriend, Conrad Roy III, two teen-agers whose relationship developed mostly over text and Facebook message. In 2017, Carter was convicted of involuntary manslaughter for encouraging Roy to kill himself—even though the pair had met only a handful of times. Messages between the couple revealed the kind of twisted emotional dynamic that can emerge in the absence of physical proximity.

    Despite these stories, digital-first (and digital-only) relationships continue to thrive. With online dating now a fact of life, a new bogeyman, virtual-reality dating, has taken its place, threatening to cut the final cord between romance and the real world. The platform VRLFP—Virtual Reality Looking For Partner—advertises itself as the perfect solution for daters who’d rather not deal with the hassles of Tinder flirting or late-night bar crawls. (“Grab a coffee, visit an amusement park, or go to the moon without leaving your home and without spending a dime,” the VRLFP site reads. “VR makes long-distance relationships work.”) This is to say nothing of the companies designing humanoid sex robots, or the scientists designing phone cases that feel like human flesh.

    Perhaps the most innocuous entry in the digital-dating marketplace is a new product called Bond Touch, a set of electronic bracelets meant for long-distance daters. (Shawn Mendes and Camila Cabello, one of the most P.D.A.-fluent couples of our time, were recently spotted wearing the bracelets.) Unlike the cold fantasias of VR courtship, Bond Touch bracelets are fundamentally wholesome, and they reduce long-distance relationships to a series of mundane concerns. How can you sustain a healthy amount of communication with a long-distance partner? How can you feel close to someone who’s physically distant? And how do you simulate the wordless gestures of affection that account for so much of personal connection? Created in Silicon Valley by a developer named Christoph Dressel—who is also the C.O.O. of an environmentally minded technology firm called Impossible—the bracelets are slim, chic devices that resemble Fitbits. By wearing one, a person can send a tap that generates a light vibration and a colored blink on the screen of a partner’s bracelet. The bracelets are also linked through an app that provides information about a partner’s weather and time zone, but their primary function is to embody presence. Like Facebook’s early “Poke” feature, they impart the same message as a shoulder squeeze or a gaze across the room at a party: “I’m here, and I’m thinking about you.”

    In theory, the bracelets could service any form of long-distance relationship—military members and their families, partners separated by jobs or school, siblings living in different cities—but they seem to be most popular among teen-agers who’ve forged romantic relationships online. Bond Touch is a hot topic of discussion in certain corners of YouTube and Reddit, where users provide excessively detailed reviews of their bracelet-wearing experience. These users seem less concerned with simulating touch or affection than with communicating when they don’t have access to their phone, namely during class or at part-time jobs. They often develop Morse-code-like systems to lend layers of meaning to their taps. “When I really want his attention, I just send a very long one, and then he’s, like, ‘What do you want?’ . . . Three taps means ‘I love you,’ ” one YouTuber, HeyItsTay, explains, in a video that’s garnered over 1.8 million views. Safety is also a chief concern: almost all of the vloggers explain that Bond Touch is an effective way of letting someone know that you’re O.K., even if you’re not responding to text messages or Instagram DMs.

    Something like a Bond Touch bracelet ostensibly solves a communication problem, but it also creates one—the problem of over-availability, in which no one can be unreachable and no sentiment goes unexpressed. (One can imagine the anxieties that might arise from a set of unanswered taps, and the bracelets have already inspired plenty of off-label uses. “Great way for cheating in class,” one user commented on HeyItsTay’s Bond Touch video.) Not all technology is corrosive, of course, but there is something disheartening about a relationship wherein digital bracelets are meant to replace the rhythms of conversation and the ebbs and flows of emotional connection. The problem has less to do with the bracelets themselves than with the trend that they advance. In lieu of facetime, we seem willing to accept even the most basic forms of emotional stimulus, no matter how paltry a substitute they present.

    Reading about Bond Touch, I was reminded of an episode of the 2019 breakout comedy “PEN15.” The show is set in the era of the dial-up connection, and at one point its main characters, the awkward middle schoolers Anna and Maya, experiment with AOL Instant Messenger. Maya meets a guy named “Flymiamibro22” in a chat room, and their conversation quickly sparks an infatuation—and, eventually, something resembling love. “I love you more than I love my own DAD!” Maya tells Flymiamibro22 in a violent flurry of messages. Flymiamibro22 is a self-described “gym rat,” but in reality he’s one of Maya’s classmates and friends, Sam, posing online as an older guy. At the peak of her obsession, Maya begs her crush to meet her in person, and they arrange a date at a local bowling alley. Flymiamibro22 never materializes, but Sam reveals his true identity soon after, at a school dance. This admission produces a rush of fury and humiliation. But it also, finally, leads to catharsis, the growth and wisdom that flows from a confrontation with reality. That sort of confrontation seems increasingly avoidable today.

    Carrie Battan began contributing to The New Yorker in 2015 and became a staff writer in 2018.

    #Pratiques_numériques #Sites_rencontre #Dating #Bracelet #Culture_numérique

  • When the Beatles Walked Offstage: Fifty Years of “Abbey Road” | The New Yorker
    https://www.newyorker.com/culture/culture-desk/when-the-beatles-walked-offstage-fifty-years-of-abbey-road

    An excellent article on the greatest album in pop music.

    In the spring of 1969, Paul McCartney telephoned George Martin to ask if he would be willing to work with the Beatles on a new album they planned to record in the months ahead. Martin, who was widely regarded as the most accomplished pop-record producer in the world, had overseen the making of all nine albums and nineteen singles that the Beatles had released in Britain since their début on E.M.I.’s Parlophone label, in 1962. His reputation was synonymous with that of the group, and the fact that McCartney felt a need to ask him about his availability dramatized how much the Beatles’ professional circumstances had changed since the release of the two-record set known as the White Album, in the fall of 1968. In Martin’s view, the five months of tension and drama it took to make that album, followed by the fiasco of “Get Back,” an ill-fated film, concert, and recording project that ended inconclusively in January, 1969, had turned his recent work with the Beatles into a “miserable experience.”

    “After [‘Get Back’] I thought it was the end of the road for all of us,” he said later. “I didn’t really want to work with them anymore because they were becoming unpleasant people, to themselves as well as to other people. So I was quite surprised when Paul rang me up and asked me to produce another record for them. He said, ‘Will you really produce it?’ And I said, ‘If I’m really allowed to produce it. If I have to go back and accept a lot of instructions that I don’t like, then I won’t do it.’ ” After receiving McCartney’s assurance that he would indeed have a free hand, Martin booked a solid block of time at Abbey Road studios from the first of July to the end of August.

    To speak of “sides” is to acknowledge that “Abbey Road,” like most Beatles albums, was originally released as a double-sided vinyl LP. This was the format with which the group had revolutionized the recording industry in the sixties, when its popularity, self-sufficiency, and burgeoning artistic ambition helped to establish the self-written album as the principal medium of rock. Earlier, in the fifties, when “long-playing” records first became available, their selling point was their capacity. Unlike the 78-r.p.m. records they replaced, LPs could hold more than twenty minutes of music per side, which made them an ideal format for the extended performances of classical music, Broadway shows, film soundtracks, modern jazz, and standup comedy that accounted for the lion’s share of the record market at the time. Best-selling pop singers like Frank Sinatra, Harry Belafonte, and Elvis Presley also capitalized on the potential of the LP, not least because a prime virtue of albums in the pop market was their packaging. The records were sold in foot-square cardboard sleeves, faced with a photograph or illustration that served as an advertisement for the product within. By providing a portrait of the artist and a platform for the sort of promotional copy that had previously been confined to fan magazines, album “jackets” served as a tangible accessory to the experience of record listening. LP covers became an established form of graphic art, and the high standard of the graphic design on the Beatles’ early albums was one of the ways that Brian Epstein and George Martin sought to distinguish the group from the patronizing stereotypes that applied to teen-age pop.

    All of this, it goes without saying, is ancient history in an era of digital streaming and shuffling, which threatens the very concept of a record album as a cohesive work of art. In this sense, the fiftieth anniversary reissue of “Abbey Road” is an anachronism, a throwback to a time when an LP cover could serve as a cultural icon and the order of the songs on the two sides of an album became etched on its listeners’ minds. In the iconography of Beatles album covers, “Abbey Road” ranks with the conclave of culture heroes on the front of “Sgt. Pepper” and the mysterious side-lit portrait on the group’s first Capitol LP. Yet, like so much else on the album, its cover was a product of compromise. After entertaining the notion of naming the album “Everest” and travelling to Nepal to have themselves photographed in front of the world’s tallest peak, the Beatles elected to simply walk out the door of the studio on an August afternoon. The famous tableau of the four of them striding purposefully across the now-landmarked “zebra crossing”—Lennon in white, Starr in black, McCartney in gray, and Harrison in hippie denim from head to toe—advertised the differences in a band that had first captured the attention of the world in matching suits and haircuts. But its iconic status owed to the way it came to serve, in retrospect, as a typically droll image of the Beatles, walking off the stage of their career as a group.

    To return to Ned Rorem’s formulation: How good were the Beatles, notwithstanding the fact that everyone knew they were good? Good enough to produce this self-allusive masterpiece with their dying breath as a band. Good enough to enlist the smoke and mirrors of a modern recording studio to simulate the merger of musical sensibilities that they had once achieved by means of an unprecedented concentration and collaboration of sovereign talent. In this sense, “Abbey Road” memorializes a paradox of the group. The singing, songwriting, and playing on the album affirm the extent to which all four of the Beatles became consummate musical professionals in the course of their eight-year career. But the ending of that career affirms the extent to which these four “mates” from Liverpool, whose lives were transformed by such a surfeit of wealth and fame, never gave a thought to professionalizing their personal relationships with one another.

    Their contemporaries, such as the Rolling Stones and the Who, would carry on for decades as lucrative rock franchises, long after the bonds of adolescent friendship that originally joined them together had withered away. But, for the Beatles, whose adolescent friendship institutionalized the archetype of the rock group, a ubiquitous mode of musical organization that has endured to the present day, the deterioration in their personal relations completely outweighed the financial incentives that came with their status as the most successful musical artists of their time. From the beginning, they were understood to be a “band” in both senses of the word: as musicians, of course, but also, on a more elemental level, as a group of young men who shared a sense of identity, solidarity, and purpose. “I’ve compared it to a marriage,” Lennon would say. “Up until then, we really believed intensely in what we were doing, and the product we put out, and everything had to be just right. Suddenly we didn’t believe. And that was the end of it.”

    #Musique #The_Beatles #Abbey_Road #Vinyls

  • Ric Ocasek’s Eternal Cool | The New Yorker
    https://www.newyorker.com/culture/postscript/ric-ocaseks-eternal-cool

    Ocasek sang most of their other hits. The Cars combined the pleasures of New Wave synth modernity with the pleasures of bar-band guitar rock, in a style made especially distinctive by Ocasek’s borderline eerie vocals and aesthetic: starkly bold attire, black shades, black hairdo with a hint of fright wig. As a singer and a presence, Ocasek both channelled powerful emotion and seemed to float above it, as mysteriously as the ever-present sunglasses that obscured the look in his eyes. The Cars released their self-titled début in 1978; it was an instant classic. (I’m not sure I’ve ever listened to FM radio in my home town without hearing one of its songs in a rock block.) The album’s first track, “Good Times Roll,” is a strangely dispassionate call to revelry: mid-tempo, instructing, cool, hovering aloof above the notion of good times. It begins with spare, locomotive guitar. Ocasek commands us to let the good times roll, knock us around, make us a clown, leave us up in the air—but it doesn’t sound as if he’s going to do these things. Whereas the beloved 1956 Shirley and Lee song “Let the Good Times Roll” feels like a party—an instant get-on-the-dance-floor—the Cars are doing something stranger. Rock and roll is all about good times, but the Cars aren’t going to just lob them at us: instead, Ocasek invokes them for us to engage in, then leans back to watch what we do, like some kind of good-times fetishist.

    His vocals on the album’s other singles retain that weird cool, but they add emotions we can detect, even feel. “My Best Friend’s Girl” begins with penetrating guitar, hand claps, and vocals, but then plunges into friendly pop and gang’s-all-here backup singing. When Ocasek sings “She’s dancing ’neath the starry sky” and adds, “She’s my best friend’s girl / and she used to be mine,” it hurts, sweetly, and we begin to understand him as a human.

    Since I learned of Ocasek’s death, I’ve been pondering the nature of the Cars’ particular sound, and how, early on, they differed from their fellow New Wave artists and synth enthusiasts. For one thing, they employed the sounds of modernity and machinery without being woo-woo about it; they weren’t art rock à la Bowie and Brian Eno, or Kraftwerk, or Joy Division. Today, I saw that, in 1978, Ocasek, when asked by the Globe about rumors that the Cars had sought production by Eno, said, “No, we have enough oblique strategy already. If we had any more, we’d be on a space capsule headed for Mars.” They didn’t want Mars—they wanted to go their own way, unique and on the ground.

    #Musique #Ric_Ocasek #The_Cars

  • A piece by the writer #Jonathan_Franzen that has set off a lot of chatter... as if collapsology took longer to reach the mainstream media in the United States:

    What If We Stopped Pretending?
    Jonathan Franzen, The New Yorker, September 8, 2019
    https://www.newyorker.com/culture/cultural-comment/what-if-we-stopped-pretending

    Adding it to the third compilation:
    https://seenthis.net/messages/680147

    #effondrement #collapsologie #catastrophe #fin_du_monde #it_has_begun #Anthropocène #capitalocène #USA

    But also to the assessments and critiques of #actions_individuelles compiled here:
    https://seenthis.net/messages/794181

    Semi #paywall, so:

    “There is infinite hope,” Kafka tells us, “only not for us.” This is a fittingly mystical epigram from a writer whose characters strive for ostensibly reachable goals and, tragically or amusingly, never manage to get any closer to them. But it seems to me, in our rapidly darkening world, that the converse of Kafka’s quip is equally true: There is no hope, except for us.

    I’m talking, of course, about climate change. The struggle to rein in global carbon emissions and keep the planet from melting down has the feel of Kafka’s fiction. The goal has been clear for thirty years, and despite earnest efforts we’ve made essentially no progress toward reaching it. Today, the scientific evidence verges on irrefutable. If you’re younger than sixty, you have a good chance of witnessing the radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought. If you’re under thirty, you’re all but guaranteed to witness it.

    If you care about the planet, and about the people and animals who live on it, there are two ways to think about this. You can keep on hoping that catastrophe is preventable, and feel ever more frustrated or enraged by the world’s inaction. Or you can accept that disaster is coming, and begin to rethink what it means to have hope.

    Even at this late date, expressions of unrealistic hope continue to abound. Hardly a day seems to pass without my reading that it’s time to “roll up our sleeves” and “save the planet”; that the problem of climate change can be “solved” if we summon the collective will. Although this message was probably still true in 1988, when the science became fully clear, we’ve emitted as much atmospheric carbon in the past thirty years as we did in the previous two centuries of industrialization. The facts have changed, but somehow the message stays the same.

    Psychologically, this denial makes sense. Despite the outrageous fact that I’ll soon be dead forever, I live in the present, not the future. Given a choice between an alarming abstraction (death) and the reassuring evidence of my senses (breakfast!), my mind prefers to focus on the latter. The planet, too, is still marvelously intact, still basically normal—seasons changing, another election year coming, new comedies on Netflix—and its impending collapse is even harder to wrap my mind around than death. Other kinds of apocalypse, whether religious or thermonuclear or asteroidal, at least have the binary neatness of dying: one moment the world is there, the next moment it’s gone forever. Climate apocalypse, by contrast, is messy. It will take the form of increasingly severe crises compounding chaotically until civilization begins to fray. Things will get very bad, but maybe not too soon, and maybe not for everyone. Maybe not for me.

    Some of the denial, however, is more willful. The evil of the Republican Party’s position on climate science is well known, but denial is entrenched in progressive politics, too, or at least in its rhetoric. The Green New Deal, the blueprint for some of the most substantial proposals put forth on the issue, is still framed as our last chance to avert catastrophe and save the planet, by way of gargantuan renewable-energy projects. Many of the groups that support those proposals deploy the language of “stopping” climate change, or imply that there’s still time to prevent it. Unlike the political right, the left prides itself on listening to climate scientists, who do indeed allow that catastrophe is theoretically avertable. But not everyone seems to be listening carefully. The stress falls on the word theoretically.

    Our atmosphere and oceans can absorb only so much heat before climate change, intensified by various feedback loops, spins completely out of control. The consensus among scientists and policy-makers is that we’ll pass this point of no return if the global mean temperature rises by more than two degrees Celsius (maybe a little more, but also maybe a little less). The I.P.C.C.—the Intergovernmental Panel on Climate Change—tells us that, to limit the rise to less than two degrees, we not only need to reverse the trend of the past three decades. We need to approach zero net emissions, globally, in the next three decades.

    This is, to say the least, a tall order. It also assumes that you trust the I.P.C.C.’s calculations. New research, described last month in Scientific American, demonstrates that climate scientists, far from exaggerating the threat of climate change, have underestimated its pace and severity. To project the rise in the global mean temperature, scientists rely on complicated atmospheric modelling. They take a host of variables and run them through supercomputers to generate, say, ten thousand different simulations for the coming century, in order to make a “best” prediction of the rise in temperature. When a scientist predicts a rise of two degrees Celsius, she’s merely naming a number about which she’s very confident: the rise will be at least two degrees. The rise might, in fact, be far higher.

    As a non-scientist, I do my own kind of modelling. I run various future scenarios through my brain, apply the constraints of human psychology and political reality, take note of the relentless rise in global energy consumption (thus far, the carbon savings provided by renewable energy have been more than offset by consumer demand), and count the scenarios in which collective action averts catastrophe. The scenarios, which I draw from the prescriptions of policy-makers and activists, share certain necessary conditions.

    The first condition is that every one of the world’s major polluting countries institute draconian conservation measures, shut down much of its energy and transportation infrastructure, and completely retool its economy. According to a recent paper in Nature, the carbon emissions from existing global infrastructure, if operated through its normal lifetime, will exceed our entire emissions “allowance”—the further gigatons of carbon that can be released without crossing the threshold of catastrophe. (This estimate does not include the thousands of new energy and transportation projects already planned or under construction.) To stay within that allowance, a top-down intervention needs to happen not only in every country but throughout every country. Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks.

    The actions taken by these countries must also be the right ones. Vast sums of government money must be spent without wasting it and without lining the wrong pockets. Here it’s useful to recall the Kafkaesque joke of the European Union’s biofuel mandate, which served to accelerate the deforestation of Indonesia for palm-oil plantations, and the American subsidy of ethanol fuel, which turned out to benefit no one but corn farmers.

    Finally, overwhelming numbers of human beings, including millions of government-hating Americans, need to accept high taxes and severe curtailment of their familiar life styles without revolting. They must accept the reality of climate change and have faith in the extreme measures taken to combat it. They can’t dismiss news they dislike as fake. They have to set aside nationalism and class and racial resentments. They have to make sacrifices for distant threatened nations and distant future generations. They have to be permanently terrified by hotter summers and more frequent natural disasters, rather than just getting used to them. Every day, instead of thinking about breakfast, they have to think about death.

    Call me a pessimist or call me a humanist, but I don’t see human nature fundamentally changing anytime soon. I can run ten thousand scenarios through my model, and in not one of them do I see the two-degree target being met.

    To judge from recent opinion polls, which show that a majority of Americans (many of them Republican) are pessimistic about the planet’s future, and from the success of a book like David Wallace-Wells’s harrowing “The Uninhabitable Earth,” which was released this year, I’m not alone in having reached this conclusion. But there continues to be a reluctance to broadcast it. Some climate activists argue that if we publicly admit that the problem can’t be solved, it will discourage people from taking any ameliorative action at all. This seems to me not only a patronizing calculation but an ineffectual one, given how little progress we have to show for it to date. The activists who make it remind me of the religious leaders who fear that, without the promise of eternal salvation, people won’t bother to behave well. In my experience, nonbelievers are no less loving of their neighbors than believers. And so I wonder what might happen if, instead of denying reality, we told ourselves the truth.

    First of all, even if we can no longer hope to be saved from two degrees of warming, there’s still a strong practical and ethical case for reducing carbon emissions. In the long run, it probably makes no difference how badly we overshoot two degrees; once the point of no return is passed, the world will become self-transforming. In the shorter term, however, half measures are better than no measures. Halfway cutting our emissions would make the immediate effects of warming somewhat less severe, and it would somewhat postpone the point of no return. The most terrifying thing about climate change is the speed at which it’s advancing, the almost monthly shattering of temperature records. If collective action resulted in just one fewer devastating hurricane, just a few extra years of relative stability, it would be a goal worth pursuing.

    In fact, it would be worth pursuing even if it had no effect at all. To fail to conserve a finite resource when conservation measures are available, to needlessly add carbon to the atmosphere when we know very well what carbon is doing to it, is simply wrong. Although the actions of one individual have zero effect on the climate, this doesn’t mean that they’re meaningless. Each of us has an ethical choice to make. During the Protestant Reformation, when “end times” was merely an idea, not the horribly concrete thing it is today, a key doctrinal question was whether you should perform good works because it will get you into Heaven, or whether you should perform them simply because they’re good—because, while Heaven is a question mark, you know that this world would be better if everyone performed them. I can respect the planet, and care about the people with whom I share it, without believing that it will save me.

    More than that, a false hope of salvation can be actively harmful. If you persist in believing that catastrophe can be averted, you commit yourself to tackling a problem so immense that it needs to be everyone’s overriding priority forever. One result, weirdly, is a kind of complacency: by voting for green candidates, riding a bicycle to work, avoiding air travel, you might feel that you’ve done everything you can for the only thing worth doing. Whereas, if you accept the reality that the planet will soon overheat to the point of threatening civilization, there’s a whole lot more you should be doing.

    Our resources aren’t infinite. Even if we invest much of them in a longest-shot gamble, reducing carbon emissions in the hope that it will save us, it’s unwise to invest all of them. Every billion dollars spent on high-speed trains, which may or may not be suitable for North America, is a billion not banked for disaster preparedness, reparations to inundated countries, or future humanitarian relief. Every renewable-energy mega-project that destroys a living ecosystem—the “green” energy development now occurring in Kenya’s national parks, the giant hydroelectric projects in Brazil, the construction of solar farms in open spaces, rather than in settled areas—erodes the resilience of a natural world already fighting for its life. Soil and water depletion, overuse of pesticides, the devastation of world fisheries—collective will is needed for these problems, too, and, unlike the problem of carbon, they’re within our power to solve. As a bonus, many low-tech conservation actions (restoring forests, preserving grasslands, eating less meat) can reduce our carbon footprint as effectively as massive industrial changes.

    All-out war on climate change made sense only as long as it was winnable. Once you accept that we’ve lost it, other kinds of action take on greater meaning. Preparing for fires and floods and refugees is a directly pertinent example. But the impending catastrophe heightens the urgency of almost any world-improving action. In times of increasing chaos, people seek protection in tribalism and armed force, rather than in the rule of law, and our best defense against this kind of dystopia is to maintain functioning democracies, functioning legal systems, functioning communities. In this respect, any movement toward a more just and civil society can now be considered a meaningful climate action. Securing fair elections is a climate action. Combatting extreme wealth inequality is a climate action. Shutting down the hate machines on social media is a climate action. Instituting humane immigration policy, advocating for racial and gender equality, promoting respect for laws and their enforcement, supporting a free and independent press, ridding the country of assault weapons—these are all meaningful climate actions. To survive rising temperatures, every system, whether of the natural world or of the human world, will need to be as strong and healthy as we can make it.

    And then there’s the matter of hope. If your hope for the future depends on a wildly optimistic scenario, what will you do ten years from now, when the scenario becomes unworkable even in theory? Give up on the planet entirely? To borrow from the advice of financial planners, I might suggest a more balanced portfolio of hopes, some of them longer-term, most of them shorter. It’s fine to struggle against the constraints of human nature, hoping to mitigate the worst of what’s to come, but it’s just as important to fight smaller, more local battles that you have some realistic hope of winning. Keep doing the right thing for the planet, yes, but also keep trying to save what you love specifically—a community, an institution, a wild place, a species that’s in trouble—and take heart in your small successes. Any good thing you do now is arguably a hedge against the hotter future, but the really meaningful thing is that it’s good today. As long as you have something to love, you have something to hope for.

    In Santa Cruz, where I live, there’s an organization called the Homeless Garden Project. On a small working farm at the west end of town, it offers employment, training, support, and a sense of community to members of the city’s homeless population. It can’t “solve” the problem of homelessness, but it’s been changing lives, one at a time, for nearly thirty years. Supporting itself in part by selling organic produce, it contributes more broadly to a revolution in how we think about people in need, the land we depend on, and the natural world around us. In the summer, as a member of its C.S.A. program, I enjoy its kale and strawberries, and in the fall, because the soil is alive and uncontaminated, small migratory birds find sustenance in its furrows.

    There may come a time, sooner than any of us likes to think, when the systems of industrial agriculture and global trade break down and homeless people outnumber people with homes. At that point, traditional local farming and strong communities will no longer just be liberal buzzwords. Kindness to neighbors and respect for the land—nurturing healthy soil, wisely managing water, caring for pollinators—will be essential in a crisis and in whatever society survives it. A project like the Homeless Garden offers me the hope that the future, while undoubtedly worse than the present, might also, in some ways, be better. Most of all, though, it gives me hope for today.

  • Can Reading Make You Happier? | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/can-reading-make-you-happier

    In a secular age, I suspect that reading fiction is one of the few remaining paths to transcendence, that elusive state in which the distance between the self and the universe shrinks. Reading fiction makes me lose all sense of self, but at the same time makes me feel most uniquely myself. As Woolf, the most fervent of readers, wrote, a book “splits us into two parts as we read,” for “the state of reading consists in the complete elimination of the ego,” while promising “perpetual union” with another mind.

    Bibliotherapy is a very broad term for the ancient practice of encouraging reading for therapeutic effect. The first use of the term is usually dated to a jaunty 1916 article in The Atlantic Monthly, “A Literary Clinic.” In it, the author describes stumbling upon a “bibliopathic institute” run by an acquaintance, Bagster, in the basement of his church, from where he dispenses reading recommendations with healing value. “Bibliotherapy is…a new science,” Bagster explains. “A book may be a stimulant or a sedative or an irritant or a soporific. The point is that it must do something to you, and you ought to know what it is. A book may be of the nature of a soothing syrup or it may be of the nature of a mustard plaster.” To a middle-aged client with “opinions partially ossified,” Bagster gives the following prescription: “You must read more novels. Not pleasant stories that make you forget yourself. They must be searching, drastic, stinging, relentless novels.” (George Bernard Shaw is at the top of the list.) Bagster is finally called away to deal with a patient who has “taken an overdose of war literature,” leaving the author to think about the books that “put new life into us and then set the life pulse strong but slow.”

    Today, bibliotherapy takes many different forms, from literature courses run for prison inmates to reading circles for elderly people suffering from dementia. Sometimes it can simply mean one-on-one or group sessions for “lapsed” readers who want to find their way back to an enjoyment of books.

    Berthoud and Elderkin trace the method of bibliotherapy all the way back to the Ancient Greeks, “who inscribed above the entrance to a library in Thebes that this was a ‘healing place for the soul.’ ” The practice came into its own at the end of the nineteenth century, when Sigmund Freud began using literature during psychoanalysis sessions. After the First World War, traumatized soldiers returning home from the front were often prescribed a course of reading. “Librarians in the States were given training on how to give books to WWI vets, and there’s a nice story about Jane Austen’s novels being used for bibliotherapeutic purposes at the same time in the U.K.,” Elderkin says. Later in the century, bibliotherapy was used in varying ways in hospitals and libraries, and has more recently been taken up by psychologists, social and aged-care workers, and doctors as a viable mode of therapy.

    For all avid readers who have been self-medicating with great books their entire lives, it comes as no surprise that reading books can be good for your mental health and your relationships with others, but exactly why and how is now becoming clearer, thanks to new research on reading’s effects on the brain. Since the discovery, in the mid-nineties, of “mirror neurons”—neurons that fire in our brains both when we perform an action ourselves and when we see an action performed by someone else—the neuroscience of empathy has become clearer. A 2011 study published in the Annual Review of Psychology, based on analysis of fMRI brain scans of participants, showed that, when people read about an experience, they display stimulation within the same neurological regions as when they go through that experience themselves. We draw on the same brain networks when we’re reading stories and when we’re trying to guess at another person’s feelings.

    Other studies published in 2006 and 2009 showed something similar—that people who read a lot of fiction tend to be better at empathizing with others (even after the researchers had accounted for the potential bias that people with greater empathetic tendencies may prefer to read novels). And, in 2013, an influential study published in Science found that reading literary fiction (rather than popular fiction or literary nonfiction) improved participants’ results on tests that measured social perception and empathy, which are crucial to “theory of mind”: the ability to guess with accuracy what another human being might be thinking or feeling, a skill humans only start to develop around the age of four.

    But not everybody agrees with this characterization of fiction reading as having the ability to make us behave better in real life. In her 2007 book, “Empathy and the Novel,” Suzanne Keen takes issue with this “empathy-altruism hypothesis,” and is skeptical about whether empathetic connections made while reading fiction really translate into altruistic, prosocial behavior in the world. She also points out how hard it is to really prove such a hypothesis. “Books can’t make change by themselves—and not everyone feels certain that they ought to,” Keen writes. “As any bookworm knows, readers can also seem antisocial and indolent. Novel reading is not a team sport.” Instead, she urges, we should enjoy what fiction does give us, which is a release from the moral obligation to feel something for invented characters—as you would for a real, live human being in pain or suffering—which paradoxically means readers sometimes “respond with greater empathy to an unreal situation and characters because of the protective fictionality.” And she wholeheartedly supports the personal health benefits of an immersive experience like reading, which “allows a refreshing escape from ordinary, everyday pressures.”

    #Bibliothérapie #Lecture #Romans #Psychologie #Empathie

  • James Charles and the Odd Fascination of the YouTube Beauty Wars | The New Yorker
    https://www.newyorker.com/culture/culture-desk/the-odd-fascination-of-the-youtube-beauty-wars

    Watching Westbrook’s video, I might have felt boredom (forty-three minutes?), but, instead, I felt the excitement that must overwhelm an anthropologist discovering a lost culture, obscure but oddly fascinating, with its own dramas, alliances, and enmities. Added to this effect was the comedy of the gaping chasm between the flimsiness of the conflict and its melodramatic presentation. Speaking directly to the camera, her hair and skin smooth and gleaming and her legs drawn up to her chest, Westbrook adopts a tone that often seems more appropriate for a bereavement support group than for a skirmish kindled by a supplement sponsorship. At one point, she claims that she feels betrayed because she and her husband helped Charles with business decisions for years, without expecting payment in return. “Life will never stop being painful,” she says. “No matter where in the world you are, no matter your circumstances, you are always going to experience heartbreak, and that’s part of being human.” Viewers responded enthusiastically. “Tati is no longer a beauty guru… she’s a freaking legendary life guru,” a fan wrote, in a comment that received a hundred and seventy-four thousand likes. In response, Charles came out with his own YouTube statement, in which he appears weepy and makeup-less, apologizes in vague terms to Westbrook and her husband for “everything I have put you through over the last few weeks,” and promises, in possibly even vaguer terms, to “continue to learn and grow every single day.” (He also said that he didn’t receive any payment for his SugarBearHair promotion and instead did it as a favor to the company; SugarBearHair, he said, had recently given him an artist pass when he felt “unsafe” in the less secure V.I.P. area at the Coachella music festival—the traditional ground zero for influencer drama.)

    In an Instagram post from the Met Gala earlier in the week, Charles had written, “Being invited to such an important event like the ball is such an honor and a step forward in the right direction for influencer representation in the media and I am so excited to be a catalyst.” His suggestion that influencers are a marginalized group that deserves affirmative-action-style media attention was justifiably met with derision, but it did evoke the strange, liminal position that they occupy. On the one hand, people like Charles and Westbrook—so-called civilians who have amassed millions of followers through a combination of relentless vlogging and a savvily fashioned persona—now wield enormous financial power by using their accounts to promote brands. (One report predicts that the influencer economy will be worth ten billion dollars by 2020; Instagram recently partnered with several prominent influencers to test out a program that would enable direct sales on the social-media platform.) On the other hand, influencers’ power relies on their relatability. (“I want to show you guys that, no matter who you are, you can make it,” Westbrook says, feelingly, toward the end of her “Bye sister . . .” video. “I had freaking nothing, nothing, when I started out.”) Traditional celebrities serve as powerful marketing tools precisely because, though we are enticed by the fantasy that they offer, we understand that we could never really be like them. With influencers, conversely, it feels like, with a little help and a little of their product, we could be. Influencers: they’re just like us.

    An influencer is, by definition, a creature of commerce. Unlike with a traditional celebrity, there is no creative project necessary to back up the shilling of products (say, a movie franchise used to promote merchandise)—the shilling is the project. But, paradoxically, the commercial sway that influencers hold over their fans depends on their distinctive authenticity: the sense that they are just ordinary people who happen to be recommending a product that they enjoy. Charles’s sin, according to Westbrook, was trading their friendship for lucre (or at least a Coachella pass). “My relationship with James Charles is not transactional,” Westbrook says in her video. “I have not asked him for a penny, I have never been on his Instagram.” Railing against Charles’s SugarBearHair sponsored post, she continues, “You say you don’t like the brand. You say that you’re the realest, that you can’t be bought. Well, you just were.” Later in the video, she takes on a Holden Caulfield-like tone: “You should have walked away. You should have held on to your integrity. You’re a phony.” She, herself, she claims, would never pay anyone to promote her beauty supplement in a sponsored post: “My product is good enough on its own. We’re selling like hot cakes.” Indeed, one shouldn’t underestimate the value that authenticity, or at least a performance of it, carries in the influencer marketplace. Since “Bye sister . . .” was posted, it has been viewed a staggering forty-three million times, and Westbrook has gained three million subscribers. Charles has lost roughly the same number.

    #Culture_numérique #Influenceurs

  • The Urgent Quest for Slower, Better News | The New Yorker
    https://www.newyorker.com/culture/annals-of-inquiry/the-urgent-quest-for-slower-better-news

    In 2008, the Columbia Journalism Review published an article with the headline “Overload!,” which examined news fatigue in “an age of too much information.” When “Overload!” was published, BlackBerrys still dominated the smartphone market, push notifications had yet to come to the iPhone, retweets weren’t built into Twitter, and BuzzFeed News did not exist. Looking back, the idea of suffering from information overload in 2008 seems almost quaint. Now, more than a decade later, a fresh reckoning seems to be upon us. Last year, Tim Cook, the chief executive officer of Apple, unveiled a new iPhone feature, Screen Time, which allows users to track their phone activity. During an interview at a Fortune conference, Cook said that he was monitoring his own usage and had “slashed” the number of notifications he receives. “I think it has become clear to all of us that some of us are spending too much time on our devices,” Cook said.

    It is worth considering how news organizations have contributed to the problems Newport and Cook describe. Media outlets have been reduced to fighting over a shrinking share of our attention online; as Facebook, Google, and other tech platforms have come to monopolize our digital lives, news organizations have had to assume a subsidiary role, relying on those sites for traffic. That dependence exerts a powerful influence on which stories are pursued, how they’re presented, and the speed and volume at which they’re turned out. In “World Without Mind: The Existential Threat of Big Tech,” published in 2017, Franklin Foer, the former editor-in-chief of The New Republic, writes about “a mad, shameless chase to gain clicks through Facebook” and “a relentless effort to game Google’s algorithms.” Newspapers and magazines have long sought to command large readerships, but these efforts used to be primarily the province of circulation departments; newsrooms were insulated from these pressures, with little sense of what readers actually read. Nowadays, at both legacy news organizations and those that were born online, audience metrics are everywhere. At the Times, everyone in the newsroom has access to an internal, custom-built analytics tool that shows how many people are reading each story, where those people are coming from, what devices they are using, how the stories are being promoted, and so on. Additional, commercially built audience tools, such as Chartbeat and Google Analytics, are also widely available. As the editor of newyorker.com, I keep a browser tab open to Parse.ly, an application that shows me, in real time, various readership numbers for the stories on our Web site.

    Even at news organizations committed to insuring that editorial values—and not commercial interests—determine coverage, it can be difficult for editors to decide how much attention should be paid to these metrics. In “Breaking News: The Remaking of Journalism and Why It Matters,” Alan Rusbridger, the former editor-in-chief of the Guardian, recounts the gradual introduction of metrics into his newspaper’s decision-making processes. The goal, he writes, is to have “a data-informed newsroom, not a data-led one.” But it’s hard to know when the former crosses over into being the latter.

    For digital-media organizations sustained by advertising, the temptations are almost irresistible. Each time a reader comes to a news site from a social-media or search platform, the visit, no matter how brief, brings in some amount of revenue. Foer calls this phenomenon “drive-by traffic.” As Facebook and Google have grown, they have pushed down advertising prices, and revenue-per-click from drive-by traffic has shrunk; even so, it continues to provide an incentive for any number of depressing modern media trends, including clickbait headlines, the proliferation of hastily written “hot takes,” and increasingly homogeneous coverage as everyone chases the same trending news stories, so as not to miss out on the traffic they will bring. Any content that is cheap to produce and has the potential to generate clicks on Facebook or Google is now a revenue-generating “audience opportunity.”

    Among Boczkowski’s areas of research is how young people interact with the news today. Most do not go online seeking the news; instead, they encounter it incidentally, on social media. They might get on their phones or computers to check for updates or messages from their friends, and, along the way, encounter a post from a news site. Few people sit down in the morning to read the print newspaper or make a point of watching the T.V. news in the evening. Instead, they are constantly “being touched, rubbed by the news,” Boczkowski said. “It’s part of the environment.”

    A central purpose of journalism is the creation of an informed citizenry. And yet—especially in an environment of free-floating, ambient news—it’s not entirely clear what it means to be informed. In his book “The Good Citizen,” from 1998, Michael Schudson, a sociologist who now teaches at Columbia’s journalism school, argues that the ideal of the “informed citizen”—a person with the time, discipline, and expertise needed to steep him- or herself in politics and become fully engaged in our civic life—has always been an unrealistic one. The founders, he writes, expected citizens to possess relatively little political knowledge; the ideal of the informed citizen didn’t take hold until more than a century later, when Progressive-era reformers sought to rein in the party machines and empower individual voters to make thoughtful decisions. (It was also during this period that the independent press began to emerge as a commercial phenomenon, and the press corps became increasingly professionalized.)

    Schudson proposes a model for citizenship that he believes to be more true to life: the “monitorial citizen”—a person who is watchful of what’s going on in politics but isn’t always fully engaged. “The monitorial citizen engages in environmental surveillance more than information-gathering,” he writes. “Picture parents watching small children at the community pool. They are not gathering information; they are keeping an eye on the scene. They look inactive, but they are poised for action if action is required.” Schudson contends that monitorial citizens might even be “better informed than citizens of the past in that, somewhere in their heads, they have more bits of information.” When the time is right, they will deploy this information—to vote a corrupt lawmaker out of office, say, or to approve an important ballot measure.

    #Journalisme #Médias #Economie_attention

  • The Chaos of Altamont and the Murder of Meredith Hunter | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-chaos-of-altamont-and-the-murder-of-meredith-hunter

    A great deal has been written about Altamont in the years since, but so much of the language around it has the exonerating blush of the passive: the sixties were ending; the Angels were the Angels; it could only happen to the Stones. There may have been larger forces at work, but the attempt to see Altamont as the end of the sixties obscures the extent to which what happened that night had happened, in different ways, many times before, and has happened many times since. “A young black man murdered in the midst of a white crowd by white thugs as white men played their version of black music—it was too much to kiss off as a mere unpleasantness,” Greil Marcus wrote, in 1977. Hunter does not appear in Owens’s photos and he is only a body in “Gimme Shelter.” It is worth returning to that day and trying to see Meredith Hunter again.

    Altamont: the end of the hippie dream, or the beginning of racist violence? A turning point in U.S. history... and in the history of rock. Full of magnificent photos, and a terrible story.

    #Musique #Stones #Altamont

  • Dick Dale, the Inventor of Surf Rock, Was a Lebanese-American Kid from Boston
    https://www.newyorker.com/culture/postscript/dick-dale-the-inventor-of-surf-rock-was-a-lebanese-american-kid-from-bost

    Dale died on Saturday, at age eighty-one. It’s perhaps curious, at first glance, that a Lebanese-American kid from Boston invented a genre known as surf rock, but such is Dale’s story. He was born Richard Monsour in 1937; several decades earlier, his paternal grandparents had immigrated to the U.S. from Beirut.

    [...]

    Dale’s work was directly and mightily informed by the Arabic music that he listened to as a child. “My music comes from the rhythm of Arab songs,” Dale told the journalist George Baramki Azar, in 1998. “The darbukkah, along with the wailing style of Arab singing, especially the way they use the throat, creates a very powerful force.”

    • Since it’s semi #Paywall:

      Dick Dale, the Inventor of Surf Rock, Was a Lebanese-American Kid from Boston
      Amanda Petrusich, The New Yorker, March 18, 2019

      Like a lot of people in my generation, I heard Dick Dale’s “Misirlou” for the first time in the opening credits of Quentin Tarantino’s “Pulp Fiction.” It was 1994, I was fourteen, and my friend Bobby, who had both a license and a car, had driven us to the fancy movie theatre, the one with the un-ripped seats and slightly artier films. We were aspiring aesthetes who dreamed of one day being described as pretentious; by Thanksgiving, we had made half a dozen trips to see “Pulp Fiction.” Each time “Misirlou” played—and Tarantino lets it roll on, uninterrupted, for over a minute—I gripped my cardboard tub of popcorn a little tighter. I simply could not imagine a cooler way to start a movie. “Misirlou” is only two minutes and fifteen seconds long, all told, but it communicates an extraordinary amount of menace. Dale yelps periodically, as if he’s being hotly pursued. One is left only with the sense that something terrible and great is about to occur.

      Dale died on Saturday, at age eighty-one. It’s perhaps curious, at first glance, that a Lebanese-American kid from Boston invented a genre known as surf rock, but such is Dale’s story. He was born Richard Monsour in 1937; several decades earlier, his paternal grandparents had immigrated to the U.S. from Beirut. Dale bought his first guitar used, for eight dollars, and paid it off twenty-five or fifty cents at a time. He liked Hank Williams’s spare and searching cowboy songs—his stage name is a winking homage to the cheekiness of the country-music circuit—but he was particularly taken by the effervescent and indefatigable drumming of Gene Krupa. His guitar style is rhythmic, prickly, biting: “That’s why I play now with that heavy staccato style like I’m playing drums,” he told the Miami New Times, in 2018. “I actually started playing on soup cans and flower pots while listening to big band.” When he was a senior in high school, his family moved from Massachusetts to El Segundo, California, so that his father, a machinist, could take a job at Howard Hughes’s aerospace company. That’s when Dale started surfing.

      As far as subgenres go, surf rock is fairly specialized: the term refers to instrumental rock music made in the first half of the nineteen-sixties, in southern California, in which reverb-laden guitars approximate, in some vague way, the sound of a crashing wave. Though it is tempting to fold in bands like the Beach Boys, who often sang about surfing, surf rock was wet and gnarly and unconcerned with romance or sweetness. The important part was successfully evincing the sensation of riding atop a rushing crest of water and capturing something about that experience, which was both tense and glorious: man versus sea, man versus himself, man versus the banality and ugliness of life on land. Its biggest question was: How do we make this thing sound the way that thing feels? Surfing is an alluring sport in part because it combines recklessness with grace. Dale’s music did similar work. It was as audacious as it was beautiful.

      For six months, beginning on July 1, 1961, Dale set up at the Rendezvous Ballroom, an old dance hall on the Balboa Peninsula, in Newport Beach, and tried to bring the wildness of the Pacific Ocean inside. His song “Let’s Go Trippin’,” which he started playing that summer, is now widely considered the very first surf-rock song. He recorded it in September, and it reached No. 60 on the Hot 100. His shows at the Rendezvous were often referred to as stomps, and they routinely sold out. It is hard not to wonder now what it must have felt like in that room: the briny air, a bit of sand in everyone’s hair, Dale shredding so loud and so hard that the windows rattled. He was messing around with reverb and non-Western scales, ideas that had not yet infiltrated rock music in any meaningful way. Maybe you took a beer outside and let his guitar fade into the sound of the surf. Maybe you stood up close, near a speaker, and felt every bone in your body clack together.

      Dale’s work was directly and mightily informed by the Arabic music that he listened to as a child. “My music comes from the rhythm of Arab songs,” Dale told the journalist George Baramki Azar, in 1998. “The darbukkah, along with the wailing style of Arab singing, especially the way they use the throat, creates a very powerful force.”

      Dale was left-handed, and he preferred to play a custom-made Fender Stratocaster guitar at an indecent volume. (After he exploded enough amplifiers, Fender also made him a custom amplifier—the Dick Dale Dual Showman.) His version of “Misirlou” is gorgeously belligerent. Though it feels deeply American—it is so heavy with the energy of teen-agers, hot rods, and wide suburban boulevards—“Misirlou” is in fact an eastern Mediterranean folk song. The earliest recorded version is Greek, from 1927, and it was performed in a style known as rebetiko, itself a complex mélange of Orthodox chanting, indigenous Greek music, and the Ottoman songs that took root in Greek cities during the occupation. (A few years back, I spent some time travelling through Greece for a Times Magazine story about indigenous-Greek folk music; when I heard “Misirlou” playing from a 78-r.p.m. record on a gramophone on the outskirts of Athens—a later, slower version, recorded by an extraordinary oud player named Anton Abdelahad—I nearly choked on my cup of wine.)

      That a song written at least a century before and thousands of miles away could leave me quaking in a movie theatre in suburban New York City in 1994 is so plainly miraculous and wonderful—how do we not toast Dale for being the momentary keeper of such a thing? He eventually released nine studio albums, beginning in 1962 and ending in 2001. (In 2019, he was still touring regularly and had new dates scheduled for this spring and summer.) There’s some footage of Dale playing “Misirlou” on “Later…with Jools Holland,” in 1996, when he was nearly sixty years old. His hair has thinned, and he’s wearing a sweatband across his forehead. A feathery earring hangs from one ear. The dude is going for it in a big way. It feels like a plume of smoke is about to start rising from the strings of his guitar. His fingers never stop moving. It’s hard to see the faces of the audience members, but I like to think that their eyes were wide, and they were thinking of the sea.

      Amanda Petrusich is a staff writer at The New Yorker and the author of, most recently, “Do Not Sell at Any Price: The Wild, Obsessive Hunt for the World’s Rarest 78rpm Records.”

  • l’histgeobox : Soul Train ou l’émission la plus branchée d’Amérique.
    http://lhistgeobox.blogspot.com/2019/03/soul-train-ou-lemission-la-plus.html

    When Soul Train hit American television screens, music programs were hardly in short supply; some of them enjoyed prodigious success, such as The Ed Sullivan Show or American Bandstand, which Dick Clark had been hosting since 1952. (4) Yet on all of these programs, only Black artists whose music topped the charts could hope to perform on the studio stages. Under those conditions, many musicians, however popular, never got a look in. In the same way, the audience of dancers present at the tapings was made up exclusively of young white people. A situation that is hardly surprising, all in all, given that racial discrimination remained omnipresent in people’s minds, even though it was officially outlawed. To sell records, some labels did not hesitate to “whiten” the album covers of soul or rhythm-and-blues artists (Otis Redding’s “Otis Blue” is one of the best-known examples).

  • Do We Write Differently on a Screen? | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/do-we-write-differently-on-a-screen

    But, before that, I published my first short novel, “Tongues of Flame.” I continued to write fiction by hand and then type it up. But, at least, once it was typed, you could edit on a screen. What a difference that was! What an invitation to obsession! Hitherto, there was a limit to how many corrections you could make by hand. There was only so much space on the paper. It was discouraging—typing something out time after time, to make more and more corrections. You learned to be satisfied with what you had. Now you could go on changing things forever. I learned how important it was to keep a copy of what I had written first, so as to remember what I had meant in the beginning. Sometimes it turned out to be better than the endlessly edited version.

    We had personal computers at this point, but I still wrote fiction by hand. The mental space feels different when you work with paper. It is quieter. A momentum builds up, a spell between page and hand and eye. I like to use a nice pen and see the page slowly fill. But, for newspaper articles and translations, I now worked straight onto the computer. Which was more frenetic, nervy. The writing was definitely different. But more playful, too. You could move things around. You could experiment so easily. I am glad the computer wasn’t available when I started writing. I might have been overwhelmed by the possibilities. But once you know what you’re doing, the facility of the computer is wonderful.

    Then e-mail arrived and changed everything. First, you would only hook the computer up through your landline phone a couple of times a day, as if there were a special moment to send and receive mail. Then came the permanent connection. Finally, the wireless, and, of course, the Internet. In the space of perhaps ten years, you passed from waiting literally months for a decision on something that you’d written, or simply for a reaction from a friend or an agent, to expecting a reaction immediately. Whereas in the past you checked your in-box once a day, now you checked every five minutes.

    And now you could write an article for The Guardian or the New York Times as easily as you could write it for L’Arena di Verona. Write it and expect a response in hours. In minutes. You write the first chapter of a book and send it at once to four or five friends. Hoping they’d read it at once. It’s impossible to exaggerate how exciting this was, at first, and how harmful to the spirit. You, everybody, are suddenly incredibly needy of immediate feedback. A few more years and you were publishing regularly online for The New York Review of Books. And, hours after publication, you could know how many people were reading the piece. Is it a success? Shall I follow up with something similar?

    While you sit at your computer now, the world seethes behind the letters as they appear on the screen. You can toggle to a football match, a parliamentary debate, a tsunami. A beep tells you that an e-mail has arrived. WhatsApp flashes on the screen. Interruption is constant but also desired. Or at least you’re conflicted about it. You realize that the people reading what you have written will also be interrupted. They are also sitting at screens, with smartphones in their pockets. They won’t be able to deal with long sentences, extended metaphors. They won’t be drawn into the enchantment of the text. So should you change the way you write accordingly? Have you already changed, unwittingly?

    Or should you step back? Time to leave your computer and phone in one room, perhaps, and go and work silently on paper in another. To turn off the Wi-Fi for eight hours. Just as you once learned not to drink everything in the hotel minibar, not to eat too much at free buffets, now you have to cut down on communication. You have learned how compulsive you are, how fragile your identity, how important it is to cultivate a little distance. And your only hope is that others have learned the same lesson. Otherwise, your profession, at least as you thought of it, is finished.

    Tim Parks, a novelist and essayist, is the author of “The Novel: A Survival Skill” and “Where I’m Reading From: The Changing World of Books.”

    #Ecriture #Ordinateur #Edition

  • “The American Meme” Records the Angst of Social-Media Influencers | The New Yorker
    https://www.newyorker.com/culture/culture-desk/the-american-meme-a-new-netflix-documentary-records-the-angst-of-social-m

    The new Netflix documentary “The American Meme,” directed by Bert Marcus, offers a chilling glimpse into the lives of social-media influencers, tracking their paths to online celebrity, their attempts to keep it, and their fear of losing it. Early on in the film, the pillowy-lipped model Emily Ratajkowski (twenty million Instagram followers and counting), who first became a viral sensation when, in 2013, she appeared bare-breasted in Robin Thicke’s “Blurred Lines” video, attempts to address a popular complaint raised against social-media celebrities. “There’s the attention argument,” she says, as images of her posing in lingerie and swimwear appear on the screen. “That we’re doing it just for attention . . . And I say, what’s wrong with attention?” “The American Meme” can be seen, at least partly, as a response to Ratajkowski’s question. It’s true that the model, with her superior bone structure, lush curves, and preternatural knack for packaging her God-given gifts into an enticingly consistent product, is presented to us in the limited capacity of a talking head, and so the illusion of a perfect influencer life—in which attention is easily attracted and never worried over—can be kept. (“Privacy is dead now,” Ratajkowski says, with the offhanded flippancy of someone who is only profiting from this new reality. “Get over it.”) But what is fascinating, and valuable, about “The American Meme” is its ability to reveal the desperation, loneliness, and sheer Sisyphean tedium of ceaselessly chasing what will most likely end up being an ever-diminishing share of the online-attention economy.

    Khaled, his neck weighted with ropes of gold and diamonds, is one of the lucky predators of the particular jungle we’re living in, but Bichutsky isn’t so sure whether he’s going to maintain his own alpha position. “I’m not going to last another year,” he moans, admitting that he’s been losing followers, and that “everyone gets old and ugly one day.” Even when you’re a success, like Khaled, the hustle is grindingly boring: most of it, in the end, consists of capturing Snaps of things like your tater-tot lunch as you shout, “We the best.” And, clearly, not everyone is as blessed as the social-media impresario. During one montage, viral figures like the “Damn, Daniel” boy, “Salt Bae,” and “Chewbacca Mask Lady” populate the screen, and Ratajkowski muses on these flash-in-the-pan meme sensations: “In three or four days, does anyone remember who that person is? I don’t know.”

    The idea of achieving some sort of longevity, or at least managing to cash in on one’s viral hit, is one that preoccupies the influencers featured in “The American Meme.” “I’m thirty; pray for me,” Furlan mutters, dryly, from her spot posing on her bare living-room floor. In that sense, Paris Hilton, an executive producer of the film and also one of its subjects, is the model everyone is looking to. Hilton has managed to continue playing the game by solidifying her brand—that of a ditsy, sexy, spoiled heiress. Rather than promoting others’ products, like most influencers, she has yoked her fame to merchandise of her own: a best-selling perfume line, pet products, clothes, a lucrative d.j. career, and on and on.

    #Influenceurs #Instagram #Culture_numérique

  • Bob Dylan’s Masterpiece, “Blood on the Tracks,” Is Still Hard to Find | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/bob-dylans-masterpiece-is-still-hard-to-find

    In September, 1974, Bob Dylan spent four days in the old Studio A, his favorite recording haunt in Manhattan, and emerged with the greatest, darkest album of his career. It is a ten-song study in romantic devastation, as beautiful as it is bleak, worthy of comparison with Schubert’s “Winterreise.” Yet the record in question—“Blood on the Tracks”—has never officially seen the light of day. The Columbia label released an album with that title in January, 1975, but Dylan had reworked five of the songs in last-minute sessions in Minnesota, resulting in a substantial change of tone. Mournfulness and wistfulness gave way to a feisty, festive air. According to Andy Gill and Kevin Odegard, the authors of the book “A Simple Twist of Fate: Bob Dylan and the Making of ‘Blood on the Tracks,’ ” from 2004, Dylan feared a commercial failure. The revised “Blood” sold extremely well, reaching the top of the Billboard album chart, and it ended talk of Dylan’s creative decline. It was not, however, the masterwork of melancholy that he created in Studio A.

    Ultimately, the long-running debate over the competing incarnations of “Blood on the Tracks” misses the point of what makes this artist so infinitely interesting, at least for some of us. Jeff Slate, who wrote liner notes for “More Blood, More Tracks,” observes that Dylan’s work is always in flux. The process that is documented on these eighty-seven tracks is not one of looking for the “right” take; it’s the beginning of an endless sequence of variations, which are still unfolding on his Never-Ending Tour.

    #Bob_Dylan #Musique