• A marker to differentiate #Covid-19 infections from #réinfections
    https://www.futura-sciences.com/sante/actualites/coronavirus-marqueur-differencier-infections-reinfections-covid-19-

    "Our ability to monitor and control both infection and reinfection relies on the development of simple and immunologically sound screening strategies," the study's authors note. The analysis of these antibodies could be one such strategy.

    Source:
    Serological Markers of #SARS-CoV-2 Reinfection
    https://journals.asm.org/doi/epdf/10.1128/mbio.02141-21

  • Unreliable social science research gets more attention than solid studies

    In 2011, a striking psychology paper made a splash across social media, news, and academia: People used the internet as a form of “external” memory (https://science.sciencemag.org/content/333/6040/277), the study said, relying on it for information rather than recalling facts themselves. In 2018, a key finding from that paper failed to replicate when a team of psychologists put it and 20 other high-profile social science studies (https://www.sciencemag.org/news/2018/08/generous-approach-replication-confirms-many-high-profile-social-science-) to the test.

    But the original paper has been cited 1417 times—with more than 400 of those citations coming after the 2018 replication project. That's far more, on average, than the papers from the project that did replicate. Now, a new study underscores the popularity of unreliable studies: Social science papers that failed to replicate racked up 153 more citations, on average, than papers that replicated successfully.

    This latest result is “pretty damning,” says University of Maryland, College Park, cognitive scientist Michael Dougherty, who was not involved with the research. “Citation counts have long been treated as a proxy for research quality,” he says, so the finding that less reliable research is cited more points to a “fundamental problem” with how such work is evaluated.

    University of California, San Diego, economists Marta Serra-Garcia and Uri Gneezy were interested in whether catchy research ideas would get more attention than mundane ones, even if they were less likely to be true. So they gathered data on 80 papers from three different projects that had tried to replicate important social science findings, with varying levels of success.

    Citation counts on Google Scholar were significantly higher for the papers that failed to replicate, they report today in Science Advances, with an average boost of 16 extra citations per year. That’s a big number, Serra-Garcia and Gneezy say—papers in high-impact journals in the same time period amassed a total of about 40 citations per year on average.

    And when the researchers examined citations in papers published after the landmark replication projects, they found that the papers rarely acknowledged the failure to replicate, mentioning it only 12% of the time.

    A failed replication doesn’t necessarily mean the original finding was false, Serra-Garcia points out. Changes in methods and evolving habits among participants—like changing patterns of internet use—may explain why an old result might not hold up. But she adds that her findings point to a fundamental tension in research: Scientists want their work to be accurate, but they also want to publish results that are attention grabbing. It might be that peer reviewers lower their bar for evidence when the results are particularly surprising or exciting, she says, which could mean striking results and weaker evidence often go hand in hand.

    The guideline that “extraordinary claims require extraordinary evidence” seems to soften when it comes to publication decisions, agrees Massey University computational biologist Thomas Pfeiffer, who studies replication issues, but was not involved with this work. That points to the need for extra safeguards to bolster the credibility of published work, he says—like a higher threshold for what counts as good evidence, and more effort to focus on strong research questions and methods, rather than flashy findings.

    “The finding is catnip for [research] culture change advocates like me,” says Brian Nosek, a psychologist at the University of Virginia who has spearheaded a number of replication efforts and was a co-author on two of the three replication projects that Serra-Garcia and Gneezy drew from. But before taking it too seriously, it’s worth seeing whether this finding itself can be replicated using different samples of papers, he says.

    The result falls in line with previous studies that suggest popular research is less reliable. A 2011 study in Infection and Immunity, for example, found that high-impact journals have higher retraction rates than lower impact ones (https://journals.asm.org/doi/full/10.1128/IAI.05661-11). And Dougherty’s research—currently an unreviewed preprint—has found that more highly cited papers were based on weaker data, he says. But a 2020 paper in the Proceedings of the National Academy of Sciences that looked at a different sample of papers found no relationship between citation and replication. That suggests the sample of papers could really matter, Pfeiffer says—for instance, the effect could be particularly strong in high-impact journals.

    Nosek adds that stronger but less sensational papers may still accrue more citations over the long haul, if the popularity contest of striking results burns out: “We’ve all seen enough teen movies to know that the popular kid loses in the end to the brainy geek. Maybe scientific findings operate in the same way: Credible ones don’t get noticed as much, but they do persist and win in the end.”

    https://www.sciencemag.org/news/2021/05/unreliable-social-science-research-gets-more-attention-solid-studies
    #recherche #sciences_sociales #citations #qualité #impact #science #popularité #rétraction

  • Research ethics: a profile of retractions from world class universities

    This study aims to profile the scientific retractions published in journals indexed in the Web of Science database from 2010 to 2019, from researchers at the top 20 World Class Universities according to the Times Higher Education global ranking of 2020. Descriptive statistics, Pearson's correlation coefficient, and simple linear regression were used to analyze the data. Of the 330 analyzed retractions, #Harvard_University had the highest number of retractions, and the main reason for retraction involved problems with data and results. We conclude that universities with a higher ranking tend to have a lower rate of retraction.

    https://link.springer.com/article/10.1007/s11192-021-03987-y
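
    The analysis described in the abstract (descriptive statistics, Pearson's correlation coefficient, simple linear regression) is straightforward to sketch. A minimal Python illustration, with invented ranking positions and retraction counts standing in for the paper's actual Web of Science data:

    ```python
    # Hypothetical figures for illustration only; the real data are in the paper.
    import numpy as np
    from scipy import stats

    rank = np.array([1, 3, 5, 8, 10, 12, 15, 18, 20])            # THE 2020 ranking position
    retractions = np.array([8, 12, 14, 18, 20, 24, 28, 33, 40])  # retractions, 2010-2019

    # Descriptive statistics
    print(f"mean = {retractions.mean():.1f}, sd = {retractions.std(ddof=1):.1f}")

    # Pearson's correlation coefficient
    r, p = stats.pearsonr(rank, retractions)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")

    # Simple linear regression: retractions as a function of ranking position
    slope, intercept, rvalue, pvalue, stderr = stats.linregress(rank, retractions)
    print(f"retractions = {intercept:.1f} + {slope:.2f} * rank")
    ```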

    #rétraction #invalidation #articles #édition_scientifique #publications #recherche #université #science #ranking #rétractions_scientifiques #articles_scientifiques #universités_classées #statistiques #chiffres #Harvard #honnêteté #excellence #classement

    ping @_kg_

    • Retracted Science and the Retraction Index

      Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, they plagiarize previously published work, or they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.

      https://journals.asm.org/doi/full/10.1128/IAI.05661-11
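
      For a rough sense of how such a measure behaves, here is a small Python sketch. It assumes a retraction-index-like quantity of the form "retractions per 1,000 published articles over a time window" and correlates it with the journal impact factor; the paper's exact definition and journal data are in the article above, and the figures below are invented.

      ```python
      # Invented journal figures; the assumed index is retractions per 1,000 articles.
      from scipy import stats

      journals = {
          # name: (retractions 2001-2010, articles published, impact factor)
          "Journal A": (12, 18000, 30.0),
          "Journal B": (7, 15000, 16.0),
          "Journal C": (3, 22000, 5.5),
          "Journal D": (2, 25000, 4.1),
          "Journal E": (1, 9000, 3.2),
      }

      retraction_index = [1000 * r / n for r, n, _ in journals.values()]
      impact_factor = [jif for _, _, jif in journals.values()]

      r, p = stats.pearsonr(impact_factor, retraction_index)
      print(f"Pearson r = {r:.2f} (p = {p:.3f})")
      ```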

    • Knowledge, Normativity and Power in Academia
      Critical Interventions

      Despite its capacity to produce knowledge that can directly influence policy and affect social change, academia is still often viewed as a stereotypical ivory tower, detached from the tumult of daily life. Knowledge, Normativity, and Power in Academia argues that, in our current moment of historic global unrest, the fruits of the academy need to be examined more closely than ever. This collection pinpoints the connections among researchers, activists, and artists, arguing that—despite what we might think—the knowledge produced in universities and the processes that ignite social transformation are inextricably intertwined. Knowledge, Normativity, and Power in Academia provides analysis from both inside and outside the academy to show how this seemingly staid locale can still provide space for critique and resistance.

      https://press.uchicago.edu/ucp/books/book/distributed/K/bo33910160.html

      Written by Cluster of Excellence employees on academic excellence; based on the conference "The Power of/in Academia: Critical Interventions in Knowledge Production and Society", Cluster of Excellence "The Formation of Normative Orders", Goethe University Frankfurt.