• Unreliable social science research gets more attention than solid studies

    In 2011, a striking psychology paper made a splash across social media, news, and academia: People used the internet as a form of “external” memory (https://science.sciencemag.org/content/333/6040/277), the study said, relying on it for information rather than recalling facts themselves. In 2018, a key finding from that paper failed to replicate when a team of psychologists put it and 20 other high-profile social science studies (https://www.sciencemag.org/news/2018/08/generous-approach-replication-confirms-many-high-profile-social-science-) to the test.

    But the original paper has been cited 1,417 times—with more than 400 of those citations coming after the 2018 replication project. That’s far more, on average, than the papers from the project that did replicate. Now, a new study confirms that unreliable studies tend to attract outsized attention: Social science papers that failed to replicate racked up 153 more citations, on average, than papers that replicated successfully.

    This latest result is “pretty damning,” says University of Maryland, College Park, cognitive scientist Michael Dougherty, who was not involved with the research. “Citation counts have long been treated as a proxy for research quality,” he says, so the finding that less reliable research is cited more points to a “fundamental problem” with how such work is evaluated.

    University of California, San Diego, economists Marta Serra-Garcia and Uri Gneezy were interested in whether catchy research ideas would get more attention than mundane ones, even if they were less likely to be true. So they gathered data on 80 papers from three different projects that had tried to replicate important social science findings, with varying levels of success.

    Citation counts on Google Scholar were significantly higher for the papers that failed to replicate, they report today in Science Advances, with an average boost of 16 extra citations per year. That’s a big number, Serra-Garcia and Gneezy say—papers in high-impact journals amassed about 40 citations per year on average over the same period.

    And when the researchers examined citations in papers published after the landmark replication projects, they found that the papers rarely acknowledged the failure to replicate, mentioning it only 12% of the time.

    A failed replication doesn’t necessarily mean the original finding was false, Serra-Garcia points out. Changes in methods and evolving habits among participants—like changing patterns of internet use—may explain why an old result might not hold up. But she adds that her findings point to a fundamental tension in research: Scientists want their work to be accurate, but they also want to publish results that are attention grabbing. It might be that peer reviewers lower their bar for evidence when the results are particularly surprising or exciting, she says, which could mean striking results and weaker evidence often go hand in hand.

    The guideline that “extraordinary claims require extraordinary evidence” seems to soften when it comes to publication decisions, agrees Massey University computational biologist Thomas Pfeiffer, who studies replication issues, but was not involved with this work. That points to the need for extra safeguards to bolster the credibility of published work, he says—like a higher threshold for what counts as good evidence, and more effort to focus on strong research questions and methods, rather than flashy findings.

    “The finding is catnip for [research] culture change advocates like me,” says Brian Nosek, a psychologist at the University of Virginia who has spearheaded a number of replication efforts and was a co-author on two of the three replication projects that Serra-Garcia and Gneezy drew from. But before taking it too seriously, it’s worth seeing whether this finding itself can be replicated using different samples of papers, he says.

    The result falls in line with previous studies that suggest popular research is less reliable. A 2011 study in Infection and Immunity, for example, found that high-impact journals have higher retraction rates than lower impact ones (https://journals.asm.org/doi/full/10.1128/IAI.05661-11). And Dougherty’s research—currently an unreviewed preprint—has found that more highly cited papers were based on weaker data, he says. But a 2020 paper in the Proceedings of the National Academy of Sciences that looked at a different sample of papers found no relationship between citation and replication. That suggests the sample of papers could really matter, Pfeiffer says—for instance, the effect could be particularly strong in high-impact journals.

    Nosek adds that stronger but less sensational papers may still accrue more citations over the long haul, if the popularity contest of striking results burns out: “We’ve all seen enough teen movies to know that the popular kid loses in the end to the brainy geek. Maybe scientific findings operate in the same way: Credible ones don’t get noticed as much, but they do persist and win in the end.”

    https://www.sciencemag.org/news/2021/05/unreliable-social-science-research-gets-more-attention-solid-studies
    #recherche #sciences_sociales #citations #qualité #impact #science #popularité #rétraction

  • Research ethics: a profile of retractions from world class universities

    This study aims to profile the scientific retractions published in journals indexed in the Web of Science database from 2010 to 2019, from researchers at the top 20 World Class Universities according to the Times Higher Education global ranking of 2020. Descriptive statistics, Pearson’s correlation coefficient, and simple linear regression were used to analyze the data. Of the 330 analyzed retractions, #Harvard_University had the highest number of retractions, and the most common reason for retraction was related to data and results. We conclude that the universities with a higher ranking tend to have a lower rate of retraction.

    https://link.springer.com/article/10.1007/s11192-021-03987-y
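
    As a rough sketch of the kind of analysis the abstract describes (Pearson’s correlation and simple linear regression between a university’s rank and its retraction count), the Python snippet below uses invented placeholder numbers, not the study’s data.

    ```python
    # Sketch of the analysis named in the abstract: Pearson correlation
    # and simple linear regression between ranking position and
    # retraction count. All numbers are hypothetical placeholders.
    from scipy.stats import linregress, pearsonr

    ranks = [1, 2, 3, 5, 8, 12, 15, 20]           # THE 2020 rank (invented)
    retractions = [28, 12, 33, 9, 21, 15, 6, 11]  # retractions 2010-2019 (invented)

    r, p = pearsonr(ranks, retractions)
    fit = linregress(ranks, retractions)

    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    print(f"retractions ~= {fit.slope:.2f} * rank + {fit.intercept:.2f}")
    ```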

    #rétraction #invalidation #articles #édition_scientifique #publications #recherche #université #science #ranking #rétractions_scientifiques #articles_scientifiques #universités_classées #statistiques #chiffres #Harvard #honnêteté #excellence #classement

    ping @_kg_

    • Retracted Science and the Retraction Index

      Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, when they plagiarize previously published work, or when they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.

      https://journals.asm.org/doi/full/10.1128/IAI.05661-11
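
      A minimal sketch of how a measure of this kind can be computed is below: retractions normalized by publication volume, then correlated with impact factor. The exact normalization (per 1,000 published articles) and all figures are assumptions for illustration, not values from the paper.

      ```python
      # Sketch of a "retraction index"-style measure: retractions normalized
      # per 1,000 published articles, compared against impact factor.
      # The normalization and every figure below are illustrative assumptions.
      from scipy.stats import pearsonr

      journals = {
          # name: (retractions, articles published, impact factor) -- invented
          "Journal A": (12, 8000, 30.0),
          "Journal B": (6, 9000, 15.0),
          "Journal C": (3, 12000, 8.0),
          "Journal D": (1, 15000, 3.0),
      }

      indices, impact_factors = [], []
      for name, (retracted, published, jif) in journals.items():
          index = 1000 * retracted / published   # retractions per 1,000 articles
          indices.append(index)
          impact_factors.append(jif)
          print(f"{name}: retraction index = {index:.2f}")

      r, p = pearsonr(impact_factors, indices)
      print(f"Correlation with impact factor: r = {r:.2f} (p = {p:.3f})")
      ```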

    • Knowledge, Normativity and Power in Academia
      Critical Interventions

      Despite its capacity to produce knowledge that can directly influence policy and affect social change, academia is still often viewed as a stereotypical ivory tower, detached from the tumult of daily life. Knowledge, Normativity, and Power in Academia argues that, in our current moment of historic global unrest, the fruits of the academy need to be examined more closely than ever. This collection pinpoints the connections among researchers, activists, and artists, arguing that—despite what we might think—the knowledge produced in universities and the processes that ignite social transformation are inextricably intertwined. Knowledge, Normativity, and Power in Academia provides analysis from both inside and outside the academy to show how this seemingly staid locale can still provide space for critique and resistance.

      https://press.uchicago.edu/ucp/books/book/distributed/K/bo33910160.html

      ...written by Cluster of Excellence employees on academic excellence. Based on the conference “The Power of/in Academia: Critical Interventions in Knowledge Production and Society”, Cluster of Excellence “The Formation of Normative Orders”, Goethe University Frankfurt.

    • Rank hypocrisy – how universities betray their promises on responsible research assessment

      It is time for universities to stop the nonsense of participating in flawed university rankings exercises, argue Paul Ashwin and Derek Heim

      Scientific integrity and ethical conduct are prerequisites for ensuring society’s faith in institutions entrusted with the pursuit of knowledge. As trust in science and scientists is under scrutiny, it is imperative that universities work together to strengthen trust in higher education.

      It is therefore welcome that, across the globe, universities are collectively taking steps to stamp out questionable practices that undermine their trustworthiness. For example, the sector is making rapid progress in developing better ways of assessing the quality of research. These changes were sparked by a long-established body of evidence about the significant flaws in metrics such as journal impact factors. Now over 24,000 individuals and organisations from 166 countries are signatories of the Declaration on Research Assessment (DORA), in explicit recognition of the pernicious impacts of the irresponsible use of research metrics.

      Even so, universities continue to be complicit in the pervasive and reckless use of much more questionable metrics in the form of commercial university rankings. These increasingly shape not only how universities market themselves but also how they operate: some institutions appear to spend more time thinking about how best to “game” rankings than about improving how they fulfil their core functions. Many use institutional and subject rankings as key performance indicators and exhort departments and academics to be more “competitive”.

      Unnecessary evil

      Commercial university rankings are often positioned as a necessary evil in the life of universities. This is despite a substantial body of international literature demonstrating unequivocally their flawed nature, which is at least as strong as the evidence undermining journal impact factors. Most institutional leaders react with an embarrassed shrug; after all, they must play the hand they are dealt.

      Under the explanation that rankings are not going to go away, and often pushed hard by lay governors ignorant of the meaninglessness of rankings as measures of institutional quality, they do their best to maximise their institution’s performance. They even dedicate senior posts solely to this purpose. They then cover their websites and their buildings in loud proclamations about their “world leading” performance in these rankings. This is all at the expense of the long-term health of the sector and higher education’s reputation for scientific integrity.

      There is something soul-destroying about institutions, whose role is dedicated to the pursuit and sharing of knowledge, appearing to take seriously rankings that combine incomparable measures into aggregated scores and then rank-order the results, a process that disproportionately exaggerates very small differences in institutions’ scores.
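
      As a toy illustration of that last point (all scores invented, not taken from any real ranking), the snippet below shows how tightly clustered composite scores turn into widely separated rank positions once they are ordered.

      ```python
      # Toy illustration: near-identical composite scores become widely
      # separated rank positions. All scores are invented.
      scores = {
          "Univ A": 89.4, "Univ B": 89.3, "Univ C": 89.2, "Univ D": 89.1,
          "Univ E": 89.0, "Univ F": 88.9, "Univ G": 88.8, "Univ H": 88.7,
      }

      ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
      for rank, (name, score) in enumerate(ranked, start=1):
          print(f"rank {rank}: {name} (score {score})")

      # A 0.7-point spread in score (well under 1%) becomes a 7-place gap
      # in rank, which is what the league tables put on display.
      ```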

      Very few, if any, of the measures used are valid or reliable indicators of the quality of education or research but instead simply mirror the wealth and prestige of universities. Even worse, a primary purpose served by these rankings is – perversely – for those who produce them to sell advertising and consultancy services to the universities they are ranking.

      Despite their misleading nature being widely known and understood, the performance of universities in these rankings is still used to recruit students, and governments around the world use them to determine funding for students and initiatives. All are being deceived. Any form of university education that claims its quality is demonstrated through commercial university rankings has been mis-sold.

      There are signs of change. Utrecht University in the Netherlands has recently announced it will no longer provide data for commercial rankings, following the example of others, including Rhodes University in South Africa, which has refused to do so for many years. Universities that have signed up to More than Our Rank also emphasise other ways of measuring their quality, although in this case there is more than a slight sense that these universities want to exploit their ranking whilst keeping their integrity. This is simply not possible.

      Cognitive dissonance

      It is time for this nonsense to end. We are currently in the crazy position where, as part of their DORA commitments, ancient universities make strong promises not to use any metric without being explicit about its limitations on one part of their website, while on another, they unreservedly boast about their performance in commercial rankings to prospective students. This rank hypocrisy must stop if universities are not to undermine their position as institutions dedicated to the pursuit and sharing of trustworthy knowledge in society.

      This may feel like a forlorn hope given the severe financial pressures that so many universities are under. However, these pressures make it even more timely for universities to stop dedicating resources to rankings, whether by providing data to commercial ranking companies, paying for their “services”, or committing institutional effort to promoting their position in rankings.

      It is important to remember that DORA developed into a global phenomenon from an annual meeting of the American Society for Cell Biology. With the institutions that have withdrawn from commercial rankings and the organisations already signed up to More than Our Rank, there are the makings of a significant movement against commercial rankings. However, this movement needs to be focused on promoting “quality, not rankings”, making it clear that the latter provides no meaningful measure of the former.

      To strengthen this growing movement, academics need to stop completing hollow reputation surveys. University leadership teams and governing bodies need to urgently reflect on the grave harm that continuing to play the zero-sum rankings game is doing – both to themselves and to the long-term credibility of the sector.

      Once the spell of commercial rankings is broken, we will wonder why universities ever participated so greedily in this deceitful practice that misleads prospective students, funding bodies, governments, and employers. Higher education institutions face enough challenges from an increasingly sceptical society without engaging in divisive and meaningless competition, which undermines their integrity and trustworthiness, and is solely for the benefit of those who produce commercial university rankings.

      https://wonkhe.com/blogs/rank-hypocrisy-how-universities-betray-their-promises-on-responsible-researc

  • #Science has an ugly, complicated dark side and the #Coronavirus is bringing it out.

    #Retractions aside, the situation raises broad concerns about the rigor of published research itself. “What [the pandemic] has done is just made everyone rush to publication and rush to judgment, frankly,” says Oransky, a non-practicing medical doctor who is also the vice president of editorial at medical news and reference site Medscape and teaches medical journalism at New York University. “You’re seeing papers published in the world’s leading medical journals that probably shouldn’t have even been accepted in the world’s worst medical journals.”

    Earlier this month, for instance, the New England Journal of Medicine published a 61-person study of the antiviral therapy remdesivir as a treatment for COVID-19. Of the 53 patients whose data could be analyzed, 36 saw improvement after 10 days of treatment. There was no control group, and Gilead Sciences, the company that developed remdesivir, funded and conducted the study and helped write the first draft of the manuscript. The study was by no means fraudulent, but it presented a clear conflict of interest. “I think it’s a good thing that it’s all out there and people are able to look at it,” Oransky says, “but it’s so inconsistent with what the New England Journal of Medicine claims that it’s always about.”

    https://www.motherjones.com/politics/2020/04/coronavirus-science-rush-to-publish-retractions

  • Pro-nuclear countries making slower progress on climate targets
    http://www.sussex.ac.uk/broadcast/read/36547

    A new study of European countries, published in the journal Climate Policy, shows that the most progress towards reducing carbon emissions and increasing renewable energy sources – as set out in the EU’s 2020 Strategy – has been made by nations without nuclear energy or with plans to reduce it.

    Conversely, pro-nuclear countries have been slower to implement wind, solar and hydropower technologies and to tackle emissions.

    While it’s difficult to show a causal link, the researchers say the study casts significant doubts on nuclear energy as the answer to combating climate change.

    #nucléaire #climat #énergies_renouvelables

  • A funny little anecdote: the specialists in monitoring article retractions... retract one of their own articles.
    They say they were wrong to advise contacting the authors or publishers in cases of fraud; it is better to contact the employer or to make the case public via the PubPeer site...
    A Retraction Watch retraction: Our 2013 advice on reporting misconduct turns out to have been wrong | Retraction Watch
    http://retractionwatch.com/2015/11/30/a-retraction-watch-retraction-our-2013-advice-on-reporting-miscondu
    #fraude #retraction #recherche

  • The news site Retraction Watch, which tracks (and also investigates) retractions of scientific articles, needs money. #retraction #recherche
    Dear Retraction Watch readers: We want to grow. Here’s how you can help | Retraction Watch
    http://retractionwatch.com/2014/03/17/dear-retraction-watch-readers-we-want-to-grow-heres-how-you-can-hel

    How will we use the money?

    Operating expenses, such as hosting charges and phone bills
    Hiring other writers as contributors
    Conducting more in-depth investigations, for which we may have to travel
    Building a proper retraction database