• Slack Is the Right Tool for the Wrong Way to Work | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/slack-is-the-right-tool-for-the-wrong-way-to-work

    Though Slack improved the areas where e-mail was lacking in an age of high message volume, it simultaneously amplified the rate at which this interaction occurs. Data gathered by the software firm RescueTime estimate that employees who use Slack check communications tools more frequently than non-users, accessing them once every five minutes on average—an absurdly high rate of interruption. Neuroscientists and psychologists teach us that our attention is fundamentally single-tasked, and switching it from one target to another is detrimental to productivity. We’re simply not wired to monitor an ongoing stream of unpredictable communication at the same time that we’re trying to also finish actual work. E-mail introduced this problem of communication-driven distraction, but Slack pushed it to a new extreme. We both love and hate Slack because this company built the right tool for the wrong way to work.

    I do not dislike Slack as much as people assume given that I wrote a book titled “Deep Work,” which advocates for the importance of long, undistracted stretches of work. The acceleration of interruption is a problem, but e-mail has its limitations, so it makes sense that companies committed to ad-hoc messaging as their central organizing principle would want to try Slack. If this tool represented the culmination of our attempts to figure out how to best work together in a digital age, I’d be more concerned, but Slack seems to be more transient. It’s a short-term optimization of our first hasty attempts to make sense of a high-tech professional world that will be followed by more substantial revolutions. The future of office work won’t be found in continuing to reduce the friction involved in messaging but, instead, in figuring out how to avoid the need to send so many messages in the first place.

    #Slack #Collaboration #Travail

  • The Second Act of Social-Media Activism | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-second-act-of-social-media-activism

    A fascinating article that starts from Zeynep Tufekci's analyses and reconsiders them in light of more recent movements.

    Some of this story may seem familiar. In “Twitter and Tear Gas: The Power and Fragility of Networked Protest,” from 2017, the sociologist Zeynep Tufekci examined how a “digitally networked public sphere” had come to shape social movements. Tufekci drew on her own experience of the 2011 Arab uprisings, whose early mobilization of social media set the stage for the protests at Gezi Park, in Istanbul, the Occupy action, in New York City, and the Black Lives Matter movement, in Ferguson. For Tufekci, the use of the Internet linked these various, decentralized uprisings and distinguished them from predecessors such as the nineteen-sixties civil-rights movement. Whereas “older movements had to build their organizing capacity first,” Tufekci argued, “modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first protest or march.”

    The speed afforded by such protest is, however, as much its peril as its promise. After a swift expansion, spontaneous movements are often prone to what Tufekci calls “tactical freezes.” Because they are often leaderless, and can lack “both the culture and the infrastructure for making collective decisions,” they are left with little room to adjust strategies or negotiate demands. At a more fundamental level, social media’s corporate infrastructure makes such movements vulnerable to coöptation and censorship. Tufekci is clear-eyed about these pitfalls, even as she rejects the broader criticisms of “slacktivism” laid out, for example, by Evgeny Morozov’s “The Net Delusion,” from 2011.

    “Twitter and Tear Gas” remains trenchant about how social media can and cannot enact reform. But movements change, as does technology. Since Tufekci’s book was published, social media has helped represent—and, in some cases, helped organize—the Arab Spring 2.0, France’s “Yellow Vest” movement, Puerto Rico’s RickyLeaks, the 2019 Iranian protests, the Hong Kong protests, and what we might call the B.L.M. uprising of 2020. This last event, still ongoing, has evinced a scale, creativity, and endurance that challenges those skeptical of the Internet’s ability to mediate a movement. As Tufekci notes in her book, the real-world effects of Occupy, the Women’s March, and even Ferguson-era B.L.M. were often underwhelming. By contrast, since George Floyd’s death, cities have cut billions of dollars from police budgets; school districts have severed ties with police; multiple police-reform-and-accountability bills have been introduced in Congress; and cities like Minneapolis have vowed to defund policing. Plenty of work remains, but the link between activism, the Internet, and material action seems to have deepened. What’s changed?

    The current uprisings slot neatly into Tufekci’s story, with one exception. As the flurry of digital activism continues, there is no sense that this movement is unclear about its aims—abolition—or that it might collapse under a tactical freeze. Instead, the many protest guides, syllabi, Webinars, and the like have made clear both the objectives of abolition and the digital savvy of abolitionists. It is a message so legible that even Fox News grasped it with relative ease. Rachel Kuo, an organizer and scholar of digital activism, told me that this clarity has been shaped partly by organizers who increasingly rely on “a combination of digital platforms, whether that’s Google Drive, Signal, Messenger, Slack, or other combinations of software, for collaboration, information storage, resource access, and daily communications.” The public tends to focus, understandably, on the profusion of hashtags and sleek graphics, but Kuo stressed that it was this “back end” work—an inventory of knowledge, a stronger sense of alliance—that has allowed digital activism to “reflect broader concerns and visions around community safety, accessibility, and accountability.” The uprisings might have unfolded organically, but what has sustained them is precisely what many prior networked protests lacked: preëxisting organizations with specific demands for a better world.

    What’s distinct about the current movement is not just the clarity of its messaging, but its ability to convey that message through so much noise. On June 2nd, the music industry launched #BlackoutTuesday, an action against police brutality that involved, among other things, Instagram and Facebook users posting plain black boxes to their accounts. The posts often included the hashtag #BlackLivesMatter; almost immediately, social-media users were inundated with even more posts, which explained why using that hashtag drowned out crucial information about events and resources with a sea of mute boxes. For Meredith Clark, a media-studies professor at the University of Virginia, the response illustrated how the B.L.M. movement had honed its ability to stick to a program, and to correct those who deployed that program naïvely. In 2014, many people had only a thin sense of how a hashtag could organize actions or establish circles of care. Today, “people understand what it means to use a hashtag,” Clark told me. They use “their own social media in a certain way to essentially quiet background noise” and “allow those voices that need to connect with each other the space to do so.” The #BlackoutTuesday affair exemplified an increasing awareness of how digital tactics have material consequences.

    These networks suggest that digital activism has entered a second act, in which the tools of the Internet have been increasingly integrated into the hard-won structure of older movements. Yet as networked protest grows in scale and popularity, it still risks being hijacked by the mainstream. Any urgent circulation of information—the same memes filtering through your Instagram stories, the same looping images retweeted into your timeline—can be numbing, and any shift in the Overton window means that hegemony drifts with it.

    In “Twitter and Tear Gas,” Tufekci wrote, “The Black Lives Matter movement is young, and how it will develop further capacities remains to be seen.” The movement is older now. It has developed its tactics, its messaging, its reach—but perhaps its most striking new capacity is a sharper recognition of social media’s limits. “This movement has mastered what social media is good for,” Deva Woodly, a professor of politics at the New School, told me. “And that’s basically the meme: it’s the headline.” Those memes, Woodly said, help “codify the message” that leads to broader, deeper conversations offline, which, in turn, build on a long history of radical pedagogy. As more and more of us join those conversations, prompted by the words and images we see on our screens, it’s clear that the revolution will not be tweeted—at least, not entirely.

    #Activisme_connecté #Black_lives_matter #Zeynep_Tufekci #Mèmes #Hashtag_movements #Médias_sociaux

  • The Walkman, Forty Years On | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-walkman-forty-years-on

    Even prior to extended quarantines, lockdowns, and self-isolation, it was hard to imagine life without the electronic escapes of noise-cancelling earbuds, smartphones, and tablets. Today, it seems impossible. Of course, there was most certainly a before and after, a point around which the cultural gravity of our plugged-in-yet-tuned-out modern lives shifted. Its name is Walkman, and it was invented, in Japan, in 1979. After the Walkman arrived on American shores, in June of 1980, under the temporary name of Soundabout, our days would never be the same.

    Up to this point, music was primarily a shared experience: families huddling around furniture-sized Philcos; teens blasting tunes from automobiles or sock-hopping to transistor radios; the bar-room juke; break-dancers popping and locking to the sonic backdrop of a boom box. After the Walkman, music could be silence to all but the listener, cocooned within a personal soundscape, which spooled on analog cassette tape. The effect was shocking even to its creators. “Everyone knows what headphones sound like today,” the late Sony designer Yasuo Kuroki wrote in a Japanese-language memoir, from 1990. “But at the time, you couldn’t even imagine it, and then suddenly Beethoven’s Fifth is hammering between your ears.”

    Sony’s chairman at the time, the genial Akio Morita, was so unsure of the device’s prospects that he ordered a manufacturing run of only thirty thousand, a drop in the bucket compared to such established lines as Trinitron televisions. Initially, he seemed right to be cautious. The Walkman débuted in Japan to near silence. But word quickly spread among the youth of Tokyo about a strange new device that let you carry a soundtrack out of your bedroom, onto commuter trains, and into city streets. Within a year and a half of the appearance of the Walkman, Sony would produce and sell two million of them.

    For the Walkman’s growing numbers of users, isolation was the whole point. “With the advent of the Sony Walkman came the end of meeting people,” Susan Blond, a vice-president at CBS Records, told the Washington Post in 1981. “It’s like a drug: You put the Walkman on and you blot out the rest of the world.” It didn’t take long for academics to coin a term for the phenomenon. The musicologist Shuhei Hosokawa called it “the Walkman effect.”

    There had been popular electronic gadgets before, such as the pocket-sized transistor radios of the fifties, sixties, and seventies. But the Walkman was in another league. Until this point, earphones had been associated with hearing impairment, geeky technicians manning sonar stations, or basement-dwelling hi-fi fanatics. Somehow, a Japanese company had made the high-tech headgear cool.

    “Steve’s point of reference was Sony at the time,” his successor at Apple, John Sculley, recalled. “He really wanted to be Sony. He didn’t want to be IBM. He didn’t want to be Microsoft. He wanted to be Sony.”

    Jobs would get his wish with the début of the iPod, in 2001. It wasn’t the first digital-music player—a South Korean firm had introduced one back in 1998. (That Sony failed to exploit the niche, in spite of having created listening-on-the-go and even owning its own record label, was a testament to how Morita’s unexpected retirement after a stroke, in 1993, hobbled the corporation.) But Apple’s was the most stylish to date, bereft of the complicated and button-festooned interfaces of its competitors, finished in sleek pearlescent plastic and with a satisfying heft that hinted at powerful technologies churning inside. Apple also introduced a tantalizing new method of serving up music: the shuffle, which let listeners remix entire musical libraries into never-ending audio backdrops for their lives. Once again, city streets were the proving ground for this evolution of portable listening technology. “I was on Madison [Ave],” Jobs told Newsweek, in 2004, “and it was, like, on every block, there was someone with white headphones, and I thought, ‘Oh, my God, it’s starting to happen.’ ”

    #Walkman #Sony #Steve_Jobs #Musique #Isolement

  • A Day of Reckoning for Michael Jackson with “Leaving Neverland” | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/a-day-of-reckoning-for-michael-jackson-with-leaving-neverland

    It is hideous, but true, that allegations of this sort have historically been treated differently when the accused is a virtuosic and deeply beloved male performer: Miles Davis allegedly beat his wives; Jimmy Page allegedly had a relationship with a fourteen-year-old girl; the late rapper XXXTentacion allegedly battered his ex-girlfriend when she was pregnant; Chuck Berry was convicted of transporting a minor across state lines for “immoral purposes”; and on, and on, and on, until the entire history of Western music collapses in a haze of abuse and transgression, unable to survive any sort of moral dragnet.

  • A text by the writer #Jonathan_Franzen that has generated a lot of chatter... as if collapsology took longer to reach the mainstream media in the United States:

    What If We Stopped Pretending?
    Jonathan Franzen, The New Yorker, September 8, 2019
    https://www.newyorker.com/culture/cultural-comment/what-if-we-stopped-pretending

    Added to the third compilation:
    https://seenthis.net/messages/680147

    #effondrement #collapsologie #catastrophe #fin_du_monde #it_has_begun #Anthropocène #capitalocène #USA

    Also added to the evaluations and critiques of #actions_individuelles compiled here:
    https://seenthis.net/messages/794181

    Semi #paywall, so:

    “There is infinite hope,” Kafka tells us, “only not for us.” This is a fittingly mystical epigram from a writer whose characters strive for ostensibly reachable goals and, tragically or amusingly, never manage to get any closer to them. But it seems to me, in our rapidly darkening world, that the converse of Kafka’s quip is equally true: There is no hope, except for us.

    I’m talking, of course, about climate change. The struggle to rein in global carbon emissions and keep the planet from melting down has the feel of Kafka’s fiction. The goal has been clear for thirty years, and despite earnest efforts we’ve made essentially no progress toward reaching it. Today, the scientific evidence verges on irrefutable. If you’re younger than sixty, you have a good chance of witnessing the radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought. If you’re under thirty, you’re all but guaranteed to witness it.

    If you care about the planet, and about the people and animals who live on it, there are two ways to think about this. You can keep on hoping that catastrophe is preventable, and feel ever more frustrated or enraged by the world’s inaction. Or you can accept that disaster is coming, and begin to rethink what it means to have hope.

    Even at this late date, expressions of unrealistic hope continue to abound. Hardly a day seems to pass without my reading that it’s time to “roll up our sleeves” and “save the planet”; that the problem of climate change can be “solved” if we summon the collective will. Although this message was probably still true in 1988, when the science became fully clear, we’ve emitted as much atmospheric carbon in the past thirty years as we did in the previous two centuries of industrialization. The facts have changed, but somehow the message stays the same.

    Psychologically, this denial makes sense. Despite the outrageous fact that I’ll soon be dead forever, I live in the present, not the future. Given a choice between an alarming abstraction (death) and the reassuring evidence of my senses (breakfast!), my mind prefers to focus on the latter. The planet, too, is still marvelously intact, still basically normal—seasons changing, another election year coming, new comedies on Netflix—and its impending collapse is even harder to wrap my mind around than death. Other kinds of apocalypse, whether religious or thermonuclear or asteroidal, at least have the binary neatness of dying: one moment the world is there, the next moment it’s gone forever. Climate apocalypse, by contrast, is messy. It will take the form of increasingly severe crises compounding chaotically until civilization begins to fray. Things will get very bad, but maybe not too soon, and maybe not for everyone. Maybe not for me.

    Some of the denial, however, is more willful. The evil of the Republican Party’s position on climate science is well known, but denial is entrenched in progressive politics, too, or at least in its rhetoric. The Green New Deal, the blueprint for some of the most substantial proposals put forth on the issue, is still framed as our last chance to avert catastrophe and save the planet, by way of gargantuan renewable-energy projects. Many of the groups that support those proposals deploy the language of “stopping” climate change, or imply that there’s still time to prevent it. Unlike the political right, the left prides itself on listening to climate scientists, who do indeed allow that catastrophe is theoretically avertable. But not everyone seems to be listening carefully. The stress falls on the word theoretically.

    Our atmosphere and oceans can absorb only so much heat before climate change, intensified by various feedback loops, spins completely out of control. The consensus among scientists and policy-makers is that we’ll pass this point of no return if the global mean temperature rises by more than two degrees Celsius (maybe a little more, but also maybe a little less). The I.P.C.C.—the Intergovernmental Panel on Climate Change—tells us that, to limit the rise to less than two degrees, we not only need to reverse the trend of the past three decades. We need to approach zero net emissions, globally, in the next three decades.

    This is, to say the least, a tall order. It also assumes that you trust the I.P.C.C.’s calculations. New research, described last month in Scientific American, demonstrates that climate scientists, far from exaggerating the threat of climate change, have underestimated its pace and severity. To project the rise in the global mean temperature, scientists rely on complicated atmospheric modelling. They take a host of variables and run them through supercomputers to generate, say, ten thousand different simulations for the coming century, in order to make a “best” prediction of the rise in temperature. When a scientist predicts a rise of two degrees Celsius, she’s merely naming a number about which she’s very confident: the rise will be at least two degrees. The rise might, in fact, be far higher.

    As a non-scientist, I do my own kind of modelling. I run various future scenarios through my brain, apply the constraints of human psychology and political reality, take note of the relentless rise in global energy consumption (thus far, the carbon savings provided by renewable energy have been more than offset by consumer demand), and count the scenarios in which collective action averts catastrophe. The scenarios, which I draw from the prescriptions of policy-makers and activists, share certain necessary conditions.

    The first condition is that every one of the world’s major polluting countries institute draconian conservation measures, shut down much of its energy and transportation infrastructure, and completely retool its economy. According to a recent paper in Nature, the carbon emissions from existing global infrastructure, if operated through its normal lifetime, will exceed our entire emissions “allowance”—the further gigatons of carbon that can be released without crossing the threshold of catastrophe. (This estimate does not include the thousands of new energy and transportation projects already planned or under construction.) To stay within that allowance, a top-down intervention needs to happen not only in every country but throughout every country. Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks.

    The actions taken by these countries must also be the right ones. Vast sums of government money must be spent without wasting it and without lining the wrong pockets. Here it’s useful to recall the Kafkaesque joke of the European Union’s biofuel mandate, which served to accelerate the deforestation of Indonesia for palm-oil plantations, and the American subsidy of ethanol fuel, which turned out to benefit no one but corn farmers.

    Finally, overwhelming numbers of human beings, including millions of government-hating Americans, need to accept high taxes and severe curtailment of their familiar life styles without revolting. They must accept the reality of climate change and have faith in the extreme measures taken to combat it. They can’t dismiss news they dislike as fake. They have to set aside nationalism and class and racial resentments. They have to make sacrifices for distant threatened nations and distant future generations. They have to be permanently terrified by hotter summers and more frequent natural disasters, rather than just getting used to them. Every day, instead of thinking about breakfast, they have to think about death.

    Call me a pessimist or call me a humanist, but I don’t see human nature fundamentally changing anytime soon. I can run ten thousand scenarios through my model, and in not one of them do I see the two-degree target being met.

    To judge from recent opinion polls, which show that a majority of Americans (many of them Republican) are pessimistic about the planet’s future, and from the success of a book like David Wallace-Wells’s harrowing “The Uninhabitable Earth,” which was released this year, I’m not alone in having reached this conclusion. But there continues to be a reluctance to broadcast it. Some climate activists argue that if we publicly admit that the problem can’t be solved, it will discourage people from taking any ameliorative action at all. This seems to me not only a patronizing calculation but an ineffectual one, given how little progress we have to show for it to date. The activists who make it remind me of the religious leaders who fear that, without the promise of eternal salvation, people won’t bother to behave well. In my experience, nonbelievers are no less loving of their neighbors than believers. And so I wonder what might happen if, instead of denying reality, we told ourselves the truth.

    First of all, even if we can no longer hope to be saved from two degrees of warming, there’s still a strong practical and ethical case for reducing carbon emissions. In the long run, it probably makes no difference how badly we overshoot two degrees; once the point of no return is passed, the world will become self-transforming. In the shorter term, however, half measures are better than no measures. Halfway cutting our emissions would make the immediate effects of warming somewhat less severe, and it would somewhat postpone the point of no return. The most terrifying thing about climate change is the speed at which it’s advancing, the almost monthly shattering of temperature records. If collective action resulted in just one fewer devastating hurricane, just a few extra years of relative stability, it would be a goal worth pursuing.

    In fact, it would be worth pursuing even if it had no effect at all. To fail to conserve a finite resource when conservation measures are available, to needlessly add carbon to the atmosphere when we know very well what carbon is doing to it, is simply wrong. Although the actions of one individual have zero effect on the climate, this doesn’t mean that they’re meaningless. Each of us has an ethical choice to make. During the Protestant Reformation, when “end times” was merely an idea, not the horribly concrete thing it is today, a key doctrinal question was whether you should perform good works because it will get you into Heaven, or whether you should perform them simply because they’re good—because, while Heaven is a question mark, you know that this world would be better if everyone performed them. I can respect the planet, and care about the people with whom I share it, without believing that it will save me.

    More than that, a false hope of salvation can be actively harmful. If you persist in believing that catastrophe can be averted, you commit yourself to tackling a problem so immense that it needs to be everyone’s overriding priority forever. One result, weirdly, is a kind of complacency: by voting for green candidates, riding a bicycle to work, avoiding air travel, you might feel that you’ve done everything you can for the only thing worth doing. Whereas, if you accept the reality that the planet will soon overheat to the point of threatening civilization, there’s a whole lot more you should be doing.

    Our resources aren’t infinite. Even if we invest much of them in a longest-shot gamble, reducing carbon emissions in the hope that it will save us, it’s unwise to invest all of them. Every billion dollars spent on high-speed trains, which may or may not be suitable for North America, is a billion not banked for disaster preparedness, reparations to inundated countries, or future humanitarian relief. Every renewable-energy mega-project that destroys a living ecosystem—the “green” energy development now occurring in Kenya’s national parks, the giant hydroelectric projects in Brazil, the construction of solar farms in open spaces, rather than in settled areas—erodes the resilience of a natural world already fighting for its life. Soil and water depletion, overuse of pesticides, the devastation of world fisheries—collective will is needed for these problems, too, and, unlike the problem of carbon, they’re within our power to solve. As a bonus, many low-tech conservation actions (restoring forests, preserving grasslands, eating less meat) can reduce our carbon footprint as effectively as massive industrial changes.

    All-out war on climate change made sense only as long as it was winnable. Once you accept that we’ve lost it, other kinds of action take on greater meaning. Preparing for fires and floods and refugees is a directly pertinent example. But the impending catastrophe heightens the urgency of almost any world-improving action. In times of increasing chaos, people seek protection in tribalism and armed force, rather than in the rule of law, and our best defense against this kind of dystopia is to maintain functioning democracies, functioning legal systems, functioning communities. In this respect, any movement toward a more just and civil society can now be considered a meaningful climate action. Securing fair elections is a climate action. Combatting extreme wealth inequality is a climate action. Shutting down the hate machines on social media is a climate action. Instituting humane immigration policy, advocating for racial and gender equality, promoting respect for laws and their enforcement, supporting a free and independent press, ridding the country of assault weapons—these are all meaningful climate actions. To survive rising temperatures, every system, whether of the natural world or of the human world, will need to be as strong and healthy as we can make it.

    And then there’s the matter of hope. If your hope for the future depends on a wildly optimistic scenario, what will you do ten years from now, when the scenario becomes unworkable even in theory? Give up on the planet entirely? To borrow from the advice of financial planners, I might suggest a more balanced portfolio of hopes, some of them longer-term, most of them shorter. It’s fine to struggle against the constraints of human nature, hoping to mitigate the worst of what’s to come, but it’s just as important to fight smaller, more local battles that you have some realistic hope of winning. Keep doing the right thing for the planet, yes, but also keep trying to save what you love specifically—a community, an institution, a wild place, a species that’s in trouble—and take heart in your small successes. Any good thing you do now is arguably a hedge against the hotter future, but the really meaningful thing is that it’s good today. As long as you have something to love, you have something to hope for.

    In Santa Cruz, where I live, there’s an organization called the Homeless Garden Project. On a small working farm at the west end of town, it offers employment, training, support, and a sense of community to members of the city’s homeless population. It can’t “solve” the problem of homelessness, but it’s been changing lives, one at a time, for nearly thirty years. Supporting itself in part by selling organic produce, it contributes more broadly to a revolution in how we think about people in need, the land we depend on, and the natural world around us. In the summer, as a member of its C.S.A. program, I enjoy its kale and strawberries, and in the fall, because the soil is alive and uncontaminated, small migratory birds find sustenance in its furrows.

    There may come a time, sooner than any of us likes to think, when the systems of industrial agriculture and global trade break down and homeless people outnumber people with homes. At that point, traditional local farming and strong communities will no longer just be liberal buzzwords. Kindness to neighbors and respect for the land—nurturing healthy soil, wisely managing water, caring for pollinators—will be essential in a crisis and in whatever society survives it. A project like the Homeless Garden offers me the hope that the future, while undoubtedly worse than the present, might also, in some ways, be better. Most of all, though, it gives me hope for today.

  • Can Reading Make You Happier? | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/can-reading-make-you-happier

    In a secular age, I suspect that reading fiction is one of the few remaining paths to transcendence, that elusive state in which the distance between the self and the universe shrinks. Reading fiction makes me lose all sense of self, but at the same time makes me feel most uniquely myself. As Woolf, the most fervent of readers, wrote, a book “splits us into two parts as we read,” for “the state of reading consists in the complete elimination of the ego,” while promising “perpetual union” with another mind.

    Bibliotherapy is a very broad term for the ancient practice of encouraging reading for therapeutic effect. The first use of the term is usually dated to a jaunty 1916 article in The Atlantic Monthly, “A Literary Clinic.” In it, the author describes stumbling upon a “bibliopathic institute” run by an acquaintance, Bagster, in the basement of his church, from where he dispenses reading recommendations with healing value. “Bibliotherapy is…a new science,” Bagster explains. “A book may be a stimulant or a sedative or an irritant or a soporific. The point is that it must do something to you, and you ought to know what it is. A book may be of the nature of a soothing syrup or it may be of the nature of a mustard plaster.” To a middle-aged client with “opinions partially ossified,” Bagster gives the following prescription: “You must read more novels. Not pleasant stories that make you forget yourself. They must be searching, drastic, stinging, relentless novels.” (George Bernard Shaw is at the top of the list.) Bagster is finally called away to deal with a patient who has “taken an overdose of war literature,” leaving the author to think about the books that “put new life into us and then set the life pulse strong but slow.”

    Today, bibliotherapy takes many different forms, from literature courses run for prison inmates to reading circles for elderly people suffering from dementia. Sometimes it can simply mean one-on-one or group sessions for “lapsed” readers who want to find their way back to an enjoyment of books.

    Berthoud and Elderkin trace the method of bibliotherapy all the way back to the Ancient Greeks, “who inscribed above the entrance to a library in Thebes that this was a ‘healing place for the soul.’ ” The practice came into its own at the end of the nineteenth century, when Sigmund Freud began using literature during psychoanalysis sessions. After the First World War, traumatized soldiers returning home from the front were often prescribed a course of reading. “Librarians in the States were given training on how to give books to WWI vets, and there’s a nice story about Jane Austen’s novels being used for bibliotherapeutic purposes at the same time in the U.K.,” Elderkin says. Later in the century, bibliotherapy was used in varying ways in hospitals and libraries, and has more recently been taken up by psychologists, social and aged-care workers, and doctors as a viable mode of therapy.

    For all avid readers who have been self-medicating with great books their entire lives, it comes as no surprise that reading books can be good for your mental health and your relationships with others, but exactly why and how is now becoming clearer, thanks to new research on reading’s effects on the brain. Since the discovery, in the mid-nineties, of “mirror neurons”—neurons that fire in our brains both when we perform an action ourselves and when we see an action performed by someone else—the neuroscience of empathy has become clearer. A 2011 study published in the Annual Review of Psychology, based on analysis of fMRI brain scans of participants, showed that, when people read about an experience, they display stimulation within the same neurological regions as when they go through that experience themselves. We draw on the same brain networks when we’re reading stories and when we’re trying to guess at another person’s feelings.

    Other studies published in 2006 and 2009 showed something similar—that people who read a lot of fiction tend to be better at empathizing with others (even after the researchers had accounted for the potential bias that people with greater empathetic tendencies may prefer to read novels). And, in 2013, an influential study published in Science found that reading literary fiction (rather than popular fiction or literary nonfiction) improved participants’ results on tests that measured social perception and empathy, which are crucial to “theory of mind”: the ability to guess with accuracy what another human being might be thinking or feeling, a skill humans only start to develop around the age of four.

    But not everybody agrees with this characterization of fiction reading as having the ability to make us behave better in real life. In her 2007 book, “Empathy and the Novel,” Suzanne Keen takes issue with this “empathy-altruism hypothesis,” and is skeptical about whether empathetic connections made while reading fiction really translate into altruistic, prosocial behavior in the world. She also points out how hard it is to really prove such a hypothesis. “Books can’t make change by themselves—and not everyone feels certain that they ought to,” Keen writes. “As any bookworm knows, readers can also seem antisocial and indolent. Novel reading is not a team sport.” Instead, she urges, we should enjoy what fiction does give us, which is a release from the moral obligation to feel something for invented characters—as you would for a real, live human being in pain or suffering—which paradoxically means readers sometimes “respond with greater empathy to an unreal situation and characters because of the protective fictionality.” And she wholeheartedly supports the personal health benefits of an immersive experience like reading, which “allows a refreshing escape from ordinary, everyday pressures.”

    #Bibliotherapy #Reading #Novels #Psychology #Empathy

  • The Chaos of Altamont and the Murder of Meredith Hunter | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-chaos-of-altamont-and-the-murder-of-meredith-hunter

    A great deal has been written about Altamont in the years since, but so much of the language around it has the exonerating blush of the passive: the sixties were ending; the Angels were the Angels; it could only happen to the Stones. There may have been larger forces at work, but the attempt to see Altamont as the end of the sixties obscures the extent to which what happened that night had happened, in different ways, many times before, and has happened many times since. “A young black man murdered in the midst of a white crowd by white thugs as white men played their version of black music—it was too much to kiss off as a mere unpleasantness,” Greil Marcus wrote, in 1977. Hunter does not appear in Owens’s photos and he is only a body in “Gimme Shelter.” It is worth returning to that day and trying to see Meredith Hunter again.

    Altamont: the end of the hippie dream, or the beginning of racist violence? A turning point in U.S. history... and in the history of rock. Full of magnificent photos, and a terrible story.

    #Music #Stones #Altamont

  • Do We Write Differently on a Screen? | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/do-we-write-differently-on-a-screen

    But, before that, I published my first short novel, “Tongues of Flame.” I continued to write fiction by hand and then type it up. But, at least, once it was typed, you could edit on a screen. What a difference that was! What an invitation to obsession! Hitherto, there was a limit to how many corrections you could make by hand. There was only so much space on the paper. It was discouraging—typing something out time after time, to make more and more corrections. You learned to be satisfied with what you had. Now you could go on changing things forever. I learned how important it was to keep a copy of what I had written first, so as to remember what I had meant in the beginning. Sometimes it turned out to be better than the endlessly edited version.

    We had personal computers at this point, but I still wrote fiction by hand. The mental space feels different when you work with paper. It is quieter. A momentum builds up, a spell between page and hand and eye. I like to use a nice pen and see the page slowly fill. But, for newspaper articles and translations, I now worked straight onto the computer. Which was more frenetic, nervy. The writing was definitely different. But more playful, too. You could move things around. You could experiment so easily. I am glad the computer wasn’t available when I started writing. I might have been overwhelmed by the possibilities. But once you know what you’re doing, the facility of the computer is wonderful.

    Then e-mail arrived and changed everything. First, you would only hook the computer up through your landline phone a couple of times a day, as if there were a special moment to send and receive mail. Then came the permanent connection. Finally, the wireless, and, of course, the Internet. In the space of perhaps ten years, you passed from waiting literally months for a decision on something that you’d written, or simply for a reaction from a friend or an agent, to expecting a reaction immediately. Whereas in the past you checked your in-box once a day, now you checked every five minutes.

    And now you could write an article for The Guardian or the New York Times as easily as you could write it for L’Arena di Verona. Write it and expect a response in hours. In minutes. You write the first chapter of a book and send it at once to four or five friends. Hoping they’d read it at once. It’s impossible to exaggerate how exciting this was, at first, and how harmful to the spirit. You, everybody, are suddenly incredibly needy of immediate feedback. A few more years and you were publishing regularly online for The New York Review of Books. And, hours after publication, you could know how many people were reading the piece. Is it a success? Shall I follow up with something similar?

    While you sit at your computer now, the world seethes behind the letters as they appear on the screen. You can toggle to a football match, a parliamentary debate, a tsunami. A beep tells you that an e-mail has arrived. WhatsApp flashes on the screen. Interruption is constant but also desired. Or at least you’re conflicted about it. You realize that the people reading what you have written will also be interrupted. They are also sitting at screens, with smartphones in their pockets. They won’t be able to deal with long sentences, extended metaphors. They won’t be drawn into the enchantment of the text. So should you change the way you write accordingly? Have you already changed, unwittingly?

    Or should you step back? Time to leave your computer and phone in one room, perhaps, and go and work silently on paper in another. To turn off the Wi-Fi for eight hours. Just as you once learned not to drink everything in the hotel minibar, not to eat too much at free buffets, now you have to cut down on communication. You have learned how compulsive you are, how fragile your identity, how important it is to cultivate a little distance. And your only hope is that others have learned the same lesson. Otherwise, your profession, at least as you thought of it, is finished.

    Tim Parks, a novelist and essayist, is the author of “The Novel: A Survival Skill” and “Where I’m Reading From: The Changing World of Books.”

    #Writing #Computer #Publishing

  • Bob Dylan’s Masterpiece, “Blood on the Tracks,” Is Still Hard to Find | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/bob-dylans-masterpiece-is-still-hard-to-find

    In September, 1974, Bob Dylan spent four days in the old Studio A, his favorite recording haunt in Manhattan, and emerged with the greatest, darkest album of his career. It is a ten-song study in romantic devastation, as beautiful as it is bleak, worthy of comparison with Schubert’s “Winterreise.” Yet the record in question—“Blood on the Tracks”—has never officially seen the light of day. The Columbia label released an album with that title in January, 1975, but Dylan had reworked five of the songs in last-minute sessions in Minnesota, resulting in a substantial change of tone. Mournfulness and wistfulness gave way to a feisty, festive air. According to Andy Gill and Kevin Odegard, the authors of the book “A Simple Twist of Fate: Bob Dylan and the Making of ‘Blood on the Tracks,’ ” from 2004, Dylan feared a commercial failure. The revised “Blood” sold extremely well, reaching the top of the Billboard album chart, and it ended talk of Dylan’s creative decline. It was not, however, the masterwork of melancholy that he created in Studio A.

    Ultimately, the long-running debate over the competing incarnations of “Blood on the Tracks” misses the point of what makes this artist so infinitely interesting, at least for some of us. Jeff Slate, who wrote liner notes for “More Blood, More Tracks,” observes that Dylan’s work is always in flux. The process that is documented on these eighty-seven tracks is not one of looking for the “right” take; it’s the beginning of an endless sequence of variations, which are still unfolding on his Never-Ending Tour.

    #Bob_Dylan #Music