industryterm:social media data

  • Linguistic red flags from Facebook posts can predict future depression diagnoses — ScienceDaily
    https://www.sciencedaily.com/releases/2018/10/181015150643.htm

    Research finds that the language people use in their Facebook posts can predict a future diagnosis of depression as accurately as the tools clinicians use in medical settings to screen for the disease.

    In any given year, depression affects more than 6 percent of the adult population in the United States — some 16 million people — but fewer than half receive the treatment they need. What if an algorithm could scan social media and point to linguistic red flags of the disease before a formal medical diagnosis had been made?

    Ah yes, that would be fantastic for Big Pharma: depression is a complex illness whose severe symptoms are often confused with ordinary low mood, a psychological state we all know. Our Facebook, coupled with our Amazon voice assistant, would stuff us full of Valium, and all would be for the best in this Brave New World.

    “Considering conditions such as depression, anxiety, and PTSD, for example, you find more signals in the way people express themselves digitally.”

    For six years, the World Well-Being Project (WWBP), based in Penn’s Positive Psychology Center and Stony Brook’s Human Language Analysis Lab, has been studying how the words people use reflect inner feelings and contentedness. In 2014, Johannes Eichstaedt, WWBP founding research scientist, started to wonder whether it was possible for social media to predict mental health outcomes, particularly for depression.

    “Social media data contain markers akin to the genome,” Eichstaedt explains. “With surprisingly similar methods to those used in genomics, we can comb social media data to find these markers. Depression appears to be something quite detectable in this way; it really changes people’s use of social media in a way that something like skin disease or diabetes doesn’t.”

    There is at least one piece of good news as far as scientific ethics is concerned:

    Rather than do what previous studies had done — recruit participants who self-reported depression — the researchers identified data from people consenting to share Facebook statuses and electronic medical-record information, and then analyzed the statuses using machine-learning techniques to distinguish those with a formal depression diagnosis.
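
    In concrete terms, what the article describes is a supervised text classifier: language features extracted from consented statuses, labels taken from the medical record. Here is a minimal sketch of that shape of approach (not the study’s actual code; the statuses and labels below are invented):

    ```python
    # Minimal sketch of the kind of pipeline described (NOT the study's
    # actual model): text features from status updates, a linear
    # classifier to separate diagnosed users from controls.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented placeholder data: one blob of statuses per consenting user,
    # one label per user from the electronic medical record.
    statuses = [
        "feeling so alone again tonight",
        "great day hiking with friends",
        "can't sleep, everything feels heavy",
        "excited for the new job next week",
    ]
    diagnosed = [1, 0, 1, 0]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
        LogisticRegression(max_iter=1000),
    )
    model.fit(statuses, diagnosed)

    # Score a new user's language for depression-associated markers.
    print(model.predict_proba(["nobody ever texts me back"])[:, 1])
    ```

    The published model drew on much richer features than this; the sketch only shows the overall shape of the method.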

    The markers in question are also social and economic markers, which ought to be addressed by means other than medication.

    They learned that these markers comprised emotional, cognitive, and interpersonal processes such as hostility and loneliness, sadness and rumination, and that they could predict future depression as early as three months before first documentation of the illness in a medical record.

    The conclusion is fantastic: the scanning must be made mandatory!!!

    Eichstaedt sees long-term potential in using these data as a form of unobtrusive screening. “The hope is that one day, these screening systems can be integrated into systems of care,” he says. “This tool raises yellow flags; eventually the hope is that you could directly funnel people it identifies into scalable treatment modalities.”

    Despite some limitations to the study, including its strictly urban sample, and limitations in the field itself — not every depression diagnosis in a medical record meets the gold standard that structured clinical interviews provide, for example — the findings offer a potential new way to uncover and get help for those suffering from depression.

    #Dépression #Facebook #Foutaises #Hubris_scientifique #Big_pharma #Psychologie

  • Why the Cambridge Analytica Scandal Is a Watershed Moment for Social Media — Knowledge@Wharton
    http://knowledge.wharton.upenn.edu/article/fallout-cambridge-analytica

    “We’re experiencing a watershed moment with regard to social media,” said Aral. “People are now beginning to realize that social media is not just either a fun plaything or a nuisance. It can have potentially real consequences in society.”

    The Cambridge Analytica scandal underscores how little consumers know about the potential uses of their data, according to Berman. He recalled a scene in the film Minority Report where Tom Cruise enters a mall and sees holograms of personally targeted ads. “Online advertising today has reached about the same level of sophistication, in terms of targeting, and also some level of prediction,” he said. “It’s not only that the advertiser can tell what you bought in the past, but also what you may be looking to buy.”

    Consumers are partially aware of that because they often see ads that show them products they have browsed, or websites they have visited, and these ads “chase them,” Berman said. “What consumers may be unaware of is how the advertiser determines what they’re looking to buy, and the Cambridge Analytica exposé shows a tiny part of this world.”

    A research paper that Nave recently co-authored captures the potential impact of the kind of work Cambridge Analytica did for the Trump campaign. “On the one hand, this form of psychological mass persuasion could be used to help people make better decisions and lead healthier and happier lives,” it stated. “On the other hand, it could be used to covertly exploit weaknesses in their character and persuade them to take action against their own best interest, highlighting the potential need for policy interventions.”

    Nave said the Cambridge Analytica scandal exposes exactly those types of risks, even as they existed before the internet era. “Propaganda is not a new invention, and neither is targeted messaging in marketing,” he said. “What this scandal demonstrates, however, is that our online behavior exposes a lot about our personality, fears and weaknesses – and that this information can be used for influencing our behavior.”

    In Golbeck’s research projects involving the use of algorithms, she found that people “are really shocked that we’re able to get these insights like what your personality traits are, what your political preferences are, how influenced you can be, and how much of that data we’re able to harvest.”

    Even more shocking, perhaps, is how easy it is to find the data. “Any app on Facebook can pull the kind of data that Cambridge Analytica did – they can [do so] for all of your data and the data of all your friends,” said Golbeck. “Even if you don’t install any apps, if your friends use apps, those apps can pull your data, and then once they have that [information] they can get these extremely deep, intimate insights using artificial intelligence, about how to influence you, how to change your behavior.” But she draws a line there: “It’s one thing if that’s to get you to buy a pair of shoes; it’s another thing if it’s to change the outcome of an election.”
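
    What those “deep, intimate insights” look like mechanically: a supervised model trained on users who took a personality survey, then applied to anyone whose likes an app has harvested. A minimal sketch (synthetic data, not Golbeck’s or Cambridge Analytica’s actual models):

    ```python
    # Sketch of trait inference from harvested likes (synthetic data):
    # train on users with known survey scores, then score anyone whose
    # likes an app has pulled.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    likes = rng.integers(0, 2, size=(500, 2000))  # users x pages, 1 = liked
    extraversion = rng.normal(size=500)           # survey-measured trait

    model = make_pipeline(TruncatedSVD(n_components=50), Ridge())
    model.fit(likes, extraversion)

    # Any harvested likes vector can now be turned into a trait estimate.
    print(model.predict(likes[:3]))
    ```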

    “Facebook has tried to play both sides of [the issue],” said Golbeck. She recalled a study by scientists from Facebook and the University of California, San Diego, that claimed social media networks could have “a measurable if limited influence on voter turnout,” as The New York Times reported. “On one hand, they claim that they can have a big influence; on the other hand they want to say ‘No, no, we haven’t had any impact on this.’ So they are going to have a really tough act to play here, to actually justify what they’re claiming on both sides.”

    Golbeck called for ways to codify how researchers could ethically go about their work using social media data, “and give people some of those rights in a broader space that they don’t have now.” Aral expected the solution to emerge in the form of “a middle ground where we learn to use these technologies ethically in order to enhance our society, our access to information, our ability to cooperate and coordinate with one another, and our ability to spread positive social change in the world.” At the same time, he advocated tightening use requirements for the data, and bringing back “the notion of informed consent and consent in a meaningful way, so that we can realize the promise of social media while avoiding the peril.”

    Historically, marketers could collect individual data, but with social platforms, they can now also collect data about a user’s social contacts, said Berman. “These social contacts never gave permission explicitly for this information to be collected,” he added. “Consumers need to realize that by following someone or connecting to someone on social media, they also expose themselves to marketers who target the followed individual.”

    In terms of safeguards, Berman said it is hard to know in advance what a company will do with the data it collects. “If they use it for normal advertising, say toothpaste, that may be legitimate, and if they use it for political advertising, as in elections, that may be illegitimate. But the data itself is the same data.”

    According to Berman, most consumers, for example, don’t know that loyalty cards are used to track their behavior and that the data is sold to marketers. Would they stop using these cards if they knew? “I am not sure,” he said. “Research shows that people in surveys say they want to maintain their privacy rights, but when asked how much they’re willing to give up in customer experience – or to pay for it – the result is not too much. In other words, there’s a difference between how we care about privacy as an idea, and how much we’re willing to give up to maintain it.”

    Golbeck said tools exist for users to limit the amount of data they let reside on social media platforms, including one called Facebook Timeline Cleaner, and a “tweet delete” feature on Twitter. “One way that you can make yourself less susceptible to some of this kind of targeting is to keep less data there, delete stuff more regularly, and treat it as an ephemeral platform,” she said.
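
    The “delete stuff more regularly” habit can also be scripted. A minimal sketch for Twitter using the tweepy library (version 4+), assuming you hold your own API credentials; real deletion tools also handle pagination and rate limits, which this omits:

    ```python
    # Minimal "ephemeral platform" sketch with tweepy 4+ (assumes your
    # own API credentials; handles only the most recent page of tweets).
    from datetime import datetime, timedelta, timezone
    import tweepy

    auth = tweepy.OAuth1UserHandler("KEY", "SECRET", "TOKEN", "TOKEN_SECRET")
    api = tweepy.API(auth)

    cutoff = datetime.now(timezone.utc) - timedelta(days=90)
    for status in api.user_timeline(count=200):
        if status.created_at < cutoff:
            api.destroy_status(status.id)  # deletion is permanent
    ```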

    But is that credible? Social media also serve as a form of personal archive.

    #Facebook #Cambridge_analytica

  • Extreme Digital Vetting of Visitors to the U.S. Moves… — ProPublica
    https://www.propublica.org/article/extreme-digital-vetting-of-visitors-to-the-u-s-moves-forward-under-a-new

    U.S. Immigration and Customs Enforcement (ICE) is taking new steps in its plans for monitoring the social media accounts of applicants for and holders of U.S. visas. At a tech industry conference last Thursday in Arlington, Virginia, ICE officials explained to software providers what they are seeking: algorithms that would assess potential threats posed by visa holders in the United States and conduct ongoing social media surveillance of those deemed high risk.

    Some analysts argue that gathering social media data is necessary. ICE already has a tool that searches for connections to terrorists, according to Claude Arnold, a former ICE Homeland Security Investigations special agent, now with the security firm Frontier Solutions. But, he said, potential terrorist threats often come from countries, such as Iraq or Syria, that provide little intelligence to U.S. authorities. As a result, in Arnold’s view, social media information is all the more important.

    Privacy advocates take a darker view. “ICE is building a dangerously broad tool that could be used to justify excluding, or deporting, almost anyone,” said Alvaro Bedoya, executive director of Georgetown Law’s Center on Privacy & Technology. “They are talking about this as a targeted tool, but the numbers tell a different story.”

    Bedoya noted that the program outline originally anticipated that the monitoring would identify 10,000 high-risk visa holders a year. That suggests the pool of people under social media surveillance would be many orders of magnitude larger. (ICE officials did not address this point at the conference.)

    Last week, a coalition of academics and technologists warned in a public letter that ICE’s interest in using big data algorithms to assess risk is misguided, given how rare it is for foreign visitors to be involved in terrorist attacks in the U.S. That means there’s little historical data to mine in hopes of using it to design a new algorithm. The letter cited a Cato Institute analysis that found that the likelihood of an American dying in a terrorist attack on U.S. soil in any given year was 1 in 3.6 million in the period between 1975 and 2015.
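
    The base-rate problem behind that statistic is easy to work through with invented but deliberately generous numbers: even a near-perfect screening model, applied to a population where real threats are vanishingly rare, flags almost exclusively innocent people.

    ```python
    # Back-of-the-envelope base-rate arithmetic (all numbers invented):
    # a model with 99% sensitivity and 99% specificity screening a
    # population that contains almost no genuine threats.
    visa_holders = 10_000_000
    true_threats = 10

    sensitivity, specificity = 0.99, 0.99
    true_positives = true_threats * sensitivity
    false_positives = (visa_holders - true_threats) * (1 - specificity)

    print(f"innocent people flagged: {false_positives:,.0f}")  # ~100,000
    precision = true_positives / (true_positives + false_positives)
    print(f"chance a flag is genuine: {precision:.5f}")        # ~0.0001
    ```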

    Cathy O’Neil, one of the signatories to that letter and author of “Weapons of Math Destruction,” told this reporter in August that any algorithm a company proposes would come built-in with some very human calculations. “At the end of the day, someone has to choose a ratio,” she said. “How many innocent false positives are you going to keep out of the country for each false negative?”
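
    O’Neil’s ratio is literally a threshold choice. With synthetic risk scores, sweeping the cut-off makes the human trade-off explicit:

    ```python
    # The false-positive/false-negative ratio is a chosen threshold,
    # not something the algorithm discovers. Synthetic scores:
    import numpy as np

    rng = np.random.default_rng(1)
    innocent = rng.normal(0.3, 0.15, 1_000_000)  # risk scores, innocents
    threats = rng.normal(0.7, 0.15, 10)          # risk scores, threats

    for threshold in (0.5, 0.7, 0.9):
        false_pos = int((innocent >= threshold).sum())  # wrongly excluded
        false_neg = int((threats < threshold).sum())    # wrongly admitted
        print(f"threshold {threshold}: {false_pos:,} false positives, "
              f"{false_neg} false negatives")
    ```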

    Social media surveillance would be difficult to carry out without collecting collateral data on thousands of American citizens in the process, said Rachel Levinson-Waldman, senior counsel to the Brennan Center’s Liberty and National Security Program.

    “Generally, with surveillance technologies, they are adopted for national security purposes overseas, but are then brought stateside pretty quickly,” she said, citing practices first honed overseas, such as intercepting cellphone calls. “So once there’s some kind of dragnet surveillance tool or information collection tool in place for one purpose, slippage can happen, and it will expand and expand.”

    #Surveillance #USA #Visas #Médias_sociaux

  • Why is Kim Kardashian hanging out with Émile Durkheim on Twitter?: Learning social media analysis as a sociologist – This Is Not a Sociology Blog
    https://christopherharpertill.wordpress.com/2016/10/22/why-is-kim-kardashian-hanging-out-with-emile-durkhe

    Are you interested in doing research with social media data? If so, you might be interested in a workshop I have organised (as a BSA Digital Sociology Group event) which will be a basic introduction to using the software program NodeXL for social scientists. This will be at Leeds Beckett on 9th January 2017, and you can register through the BSA website (£15 BSA members, £20 non-members). The workshop will be led by the excellent Wasim Ahmed, who is a PhD researcher in the Information School and a Research Associate at the Management School at The University of Sheffield, as well as a social media analysis consultant. The session will give you a grounding in using NodeXL to analyse Twitter (and potentially other networks) and suggest some ways it is of particular use to social scientists.
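
    NodeXL itself is a point-and-click Excel template, but the workflow it teaches (collect tweets, build a network of who mentions whom, measure who the conversation revolves around) translates directly to code. A rough equivalent in Python with networkx, on a made-up edge list:

    ```python
    # Rough code equivalent of the NodeXL workflow (networkx, not
    # NodeXL itself); the mention pairs are invented stand-ins for
    # data collected from the Twitter API.
    import networkx as nx

    mentions = [
        ("kimkardashian", "durkheim_quotes"),
        ("soc_student", "durkheim_quotes"),
        ("soc_student", "kimkardashian"),
    ]
    G = nx.DiGraph(mentions)

    # In-degree centrality: which accounts the conversation revolves around.
    print(nx.in_degree_centrality(G))
    ```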

    #réseaux #complexité_visuelle #medias_sociaux

    • Twitter mappers often fail to normalize their data, meaning that many Twitter maps are less representations of deep, social phenomena and more depictions of population patterns. The Ferguson map, for example, doesn’t meaningfully diverge from “typical tweeting,” Shelton says. (A minimal normalization sketch follows at the end of this item.)

      And the conclusion:

      “It’s 2015 now,” Poorthuis says. “It was cool and an engineering challenge to get these points on a map. But now it’s time to ask deeper and more meaningful questions.”

      #big_data_bourrin vs #statistiques ;-)
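
      The normalization Shelton asks for is simple arithmetic: compare each place’s share of topic tweets to its share of all tweets (a location quotient) before mapping. A toy illustration with invented counts:

      ```python
      # Location-quotient normalization for tweet maps (invented counts):
      # raw topic counts mostly mirror where people tweet at all.
      topic = {"Big City": 900, "Small Town": 30}        # tweets on the topic
      overall = {"Big City": 90_000, "Small Town": 600}  # all geotagged tweets

      total_topic = sum(topic.values())
      total_overall = sum(overall.values())

      for place in topic:
          lq = (topic[place] / total_topic) / (overall[place] / total_overall)
          print(place, round(lq, 2))  # >1: more attention than tweeting predicts
      # Big City dominates the raw counts, but Small Town over-indexes (~4.9).
      ```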

  • Visualising #migration and climate change. What can web and social media data tell us about public interest in migration and climate change?

    Using data from Google News, we have found that while the total amount of reporting on climate change is decreasing, the proportion of those stories that mention migration is increasing. Read on to find out why. (A sketch of this measurement follows below.)

    http://climatemigration.org.uk/visualising-migration-and-climate-change-what-can-web-and-socia
    #climat #changement_climatique #réfugiés_climatiques #statistiques #graphiques #chiffres
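
    The measurement behind that claim reduces to a share-of-coverage calculation per year; a minimal pandas sketch (the data frame is a hypothetical stand-in for the project’s Google News results):

    ```python
    # Share of climate-change stories that also mention migration, per
    # year (hypothetical rows standing in for Google News results).
    import pandas as pd

    articles = pd.DataFrame({
        "year":               [2013, 2013, 2014, 2014, 2015],
        "mentions_migration": [False, False, False, True, True],
    })  # one row per climate-change story

    by_year = articles.groupby("year")["mentions_migration"].agg(["size", "mean"])
    by_year.columns = ["climate_stories", "share_mentioning_migration"]
    print(by_year)
    ```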