Forthcoming | Fennia - International Journal of Geography

/index

  • Special issue of the journal #Fennia on scholarly publishing (very, very welcome).

    Can research quality be measured quantitatively? (2017-11-07)
    Michael Richard Handley Jones

    In this article I reflect on ways in which the neoliberal university and its administrative counterpart, new public management (NPM), affect academic publishing activity. One characteristic feature of NPM is the urge to use simple numerical indicators of research output as a tool to allocate funding and, in practice if not in theory, as a means of assessing research quality. This ranges from the use of journal impact factors (IF) and the ranking of journals to publication points to determine what types of published work are counted as meritorious for funding allocation. I argue that it is a fallacy to attempt to assess quality of scholarship through quantitative measures of publication output. I base my arguments on my experiences of editing a Norwegian geographical journal over a period of 16 years, along with my experiences as a scholar working for many years within the Norwegian university system.

    PDF

    Reclaiming value from academic labor: commentary by the Editors of Human Geography (2017-11-07)
    John C. Finn (Christopher Newport University), Richard Peet (Graduate School of Geography, Clark University), Sharlene Mollett (University of Toronto, Scarborough), John Lauermann (Medgar Evers College, City University of New York)

    There have long been discussions about the need for an alternative publishing model for academic research. This was made clear by the September 2017 scandal involving Third World Quarterly. The editor’s deeply problematic decision to publish an essay arguing in favor of colonialism was likely meant as click-bait to drive clicks and citations. But we should not lose sight of the fact that this latest scandal is only one recent manifestation of a long-simmering problem that has periodically commanded significant attention in the academic literature, blogs, email lists, conference sessions, and the popular press. As a direct result, over the last decade or more, new journals have been created that specifically endeavor to offer routes around corporate/capitalist academic publishing, and several existing journals have removed themselves from this profit-driven ecosystem. In this commentary, the editorial team of the journal Human Geography weighs in on what we see as the nature of the problem, what we are doing in response, what our successes have been, and what challenges remain.

    PDF

    Say ‘Yes!’ to peer review: Open Access publishing and the need for mutual aid in academia (2017-11-22)
    Simon Springer (University of Victoria), Myriam Houssay-Holzschuch, Claudia Villegas, Levi Gahman

    Scholars are increasingly declining to offer their services in the peer review process. There are myriad reasons for this refusal, most notably the ever-increasing pressure placed on academics to publish within the neoliberal university. Yet if you are publishing yourself then you necessarily expect someone else to review your work, which raises the question as to why this service is not being reciprocated. There is something to be said about withholding one’s labour when journals are under corporate control, but when it comes to Open Access journals such denial is effectively unacceptable. Make time for it, as others have made time for you. As editors of the independent, Open Access, non-corporate journal ACME: An International Journal for Critical Geographies, we reflect on the struggles facing our daily operations, where scholars declining to participate in peer review is the biggest obstacle we face. We argue that peer review should be considered as a form of mutual aid, which is rooted in an ethics of cooperation. The system only works if you say ‘Yes’!

    PDF

    Evaluating otherwise: hierarchies and opportunities in publishing practices (2017-11-30)
    Derek Ruez University of Tampere

    This short paper responds to the provocations set out in Kirsi Pauliina Kallio’s recent editorial on ‘Subtle radical moves in scientific publishing’ and emerges out of my participation in a Fennia-organized panel at the 2017 Nordic Geographers’ Meeting where participants reflected on the challenges and opportunities of creating a more equitable and pluralistic international publishing environment. Given the dominance of English language publishing in international academic work and the broader geopolitics of knowledge production through which some contexts, approaches, and modes of knowledge are regularly devalued, I suggest that—to the extent that publishing outlets are evaluated or ranked—they should be evaluated and ranked, in part, based on their contribution to a pluralistically international academy. This revaluation could help shape the informal assessments made by scholars in the context of hiring, funding, and other key decisions. It could also be integrated into more formal channels, such as within the deliberations of the boards who produce publication rankings in, for example, Finland’s Publication Forum. Such a tactic need not preclude other work to contest rankings hierarchies and audit cultures as they advance the neoliberalization of academic work, but it does 1) suggest the importance of paying attention to what and how scholars value when we evaluate publishing outlets and 2) point toward the potential of critical and creative engagement with the range of processes (i.e. indexing, accrediting, measuring, ranking etc.) that surround and subsist within academic publishing.

    PDF

    Socially just publishing: implications for geographers and their journals (2017-11-26)
    Simon Batterbury Lancaster University

    There have been a range of protests against the high journal subscription costs, and author processing charges (APCs) levied for publishing in the more prestigious and commercially run journals that are favoured by geographers. But open protests across the sector like the ‘Academic Spring’ of 2012, and challenges to commercial copyright agreements, have been fragmented and less than successful. I renew the argument for ‘socially just’ publishing in geography. For geographers this is not limited to choosing alternative publication venues. It also involves a considerable effort by senior faculty members that are assessing hiring and promotion cases, to read and assess scholarship independently of its place of publication, and to reward the efforts of colleagues that offer their work as a public good. Criteria other than the citation index and prestige of a journal need to be foregrounded. Geographers can also be publishers, and I offer my experience editing the free online Journal of Political Ecology.

    PDF

    English: lingua franca or disenfranchising? (2017-12-04)
    Sara Fregonese University of Birmingham, United Kingdom

    Conceiving academic publishing as a long-term process that often includes oral communication and knowledge exchange at academic conferences, this commentary offers a critical take on English as lingua franca. Contrary to the historical use of a lingua franca as a simplified system of transnational communication that facilitates the pragmatics of economic and cultural exchange, academic English is instead used vernacularly and becomes an excluding barrier. In the writing and peer review stages of publishing, the linguistic positionality of both authors and peer reviewers thus needs more reflection in order for academic English not to become once again part of a disenfranchising process.

    https://fennia.journal.fi/forthcoming/view/index

    #revue #édition_scientifique #publications_scientifiques #université #peer_review #anglais #langue #impact_factor #open_source #indicateurs

  • Can research quality be measured quantitatively?

    In this article I reflect on ways in which the neoliberal university and its administrative counterpart, #new_public_management (NPM), affect academic publishing activity. One characteristic feature of NPM is the urge to use simple numerical indicators of research output as a tool to allocate funding and, in practice if not in theory, as a means of assessing research quality. This ranges from the use of journal impact factors (IF) and the ranking of journals to publication points to determine what types of published work are counted as meritorious for funding allocation. I argue that it is a fallacy to attempt to assess quality of scholarship through quantitative measures of publication output. I base my arguments on my experiences of editing a Norwegian geographical journal over a period of 16 years, along with my experiences as a scholar working for many years within the Norwegian university system.

    https://fennia.journal.fi/forthcoming/article/66602/27160
    https://fennia.journal.fi/forthcoming/view/index
    #qualité #recherche #quantitativisme #université #édition_scientifique #publications_scientifiques #indicateurs #indicateurs_numériques #impact_factor #impact-factor #ranking

    • How global university rankings are changing higher education

      EARLIER this month Peking University played host to perhaps the grandest global gathering ever of the higher-education business. Senior figures from the world’s most famous universities—Harvard and Yale, Oxford and Cambridge among them—enjoyed or endured a two-hour opening ceremony followed by a packed programme of mandatory cultural events interspersed with speeches lauding “Xi Jinping thought”. The party was thrown to celebrate Peking University’s 120th birthday—and, less explicitly, China’s success in a race that started 20 years ago.

      In May 1998 Jiang Zemin, China’s president at the time, announced Project 985, named for the year and the month. Its purpose was to create world-class universities. Nian Cai Liu, a professor of polymeric materials science and engineering at Shanghai Jiao Tong University, got swept up in this initiative. “I asked myself many questions, including: what is the definition of and criteria for a world-class university? What are the positions of top Chinese universities?” Once he started benchmarking them against foreign ones, he found that “governments, universities and stakeholders from all around the world” were interested. So, in 2003, he produced the first ranking of 500 leading global institutions. Nobody, least of all the modest Professor Liu, expected the Shanghai rankings to be so popular. “Indeed, it was a real surprise.”

      People are suckers for league tables, be they of wealth, beauty, fame—or institutions of higher education. University rankings do not just feed humanity’s competitive urges. They are also an important source of consumer intelligence about a good on which people spend huge amounts of time and money, and about which precious little other information is available. Hence the existence of national league tables, such as US News & World Report’s ranking of American universities. But the creation of global league tables—there are now around 20, with Shanghai, the Times Higher Education (THE) and QS the most important—took the competition to a new level. It set not just universities, but governments, against each other.

      When the Shanghai rankings were first published, the “knowledge economy” was emerging into the global consciousness. Governments realised that great universities were no longer just sources of cultural pride and finishing schools for the children of the well-off, but the engines of future prosperity—generators of human capital, of ideas and of innovative companies.

      The rankings focused the minds of governments, particularly in countries that did badly. Every government needed a few higher-educational stars; any government that failed to create them had failed its people and lost an important global race. Europe’s poor performance was particularly galling for Germany, home of the modern research university. The government responded swiftly, announcing in 2005 an Exzellenzinitiative to channel money to institutions that might become world-class universities, and has so far spent over €4.6bn ($5.5bn) on it.

      Propelled by a combination of national pride and economic pragmatism, the idea spread swiftly that this was a global competition in which all self-respecting countries should take part. Thirty-one rich and middle-income countries have announced an excellence initiative of some sort. India, where world rankings were once regarded with post-colonial disdain, is the latest to join the race: in 2016 the finance minister announced that 20 institutions would aim to become world-class universities. The most generously funded initiatives are in France, China, Singapore, South Korea and Taiwan. The most unrealistic targets are Nigeria’s, to get at least two universities in the world’s top 200, and Russia’s, to get five in the world’s top 100, both by 2020.

      The competition to rise up the rankings has had several effects. Below the very highest rankings, still dominated by America and western Europe—America has three of the THE’s top five slots and Britain two this year—the balance of power is shifting (see chart). The rise of China is the most obvious manifestation. It has 45 universities in the Shanghai top 500 and is now the only country other than Britain or America to have two universities in the THE’s top 30. Japan is doing poorly: its highest-ranked institution, the University of Tokyo, comes in at 48 in the THE’s table. Elsewhere, Latin America and eastern Europe have lagged behind.

      The rankings race has also increased the emphasis on research. Highly cited papers provide an easily available measure of success, and, lacking any other reliable metric, that is what the league tables are based on. None of the rankings includes teaching quality, which is hard to measure and compare. Shanghai’s is purely about research; THE and QS incorporate other measures, such as “reputation”. But since the league tables themselves are one of its main determinants, reputation is not an obviously independent variable.

      Hard times

      The research boom is excellent news for humanity, which will eventually reap the benefits, and for scientific researchers. But the social sciences and humanities are not faring so well. They tend to be at a disadvantage in rankings because there are fewer soft-science or humanities journals, so hard-science papers get more citations. Shanghai makes no allowance for that, and Professor Liu admits that his ranking tends to reinforce the dominance of hard science. Phil Baty, who edits the THE’s rankings, says they do take the hard sciences’ higher citation rates into account, scoring papers by the standards of the relevant discipline.

      The hard sciences have benefited from the bounty flowing from the “excellence initiatives”. According to a study of these programmes by Jamil Salmi, author of “The Challenge of Establishing World-Class Universities”, all the programmes except Taiwan’s focused on research rather than teaching, and most of them favoured STEM subjects (science, technology, engineering and mathematics). This is no doubt one of the reasons why the numbers of scientific papers produced globally nearly doubled between 2003 and 2016.

      The rankings may be contributing to a deterioration in teaching. The quality of the research academics produce has little bearing on the quality of their teaching. Indeed, academics who are passionate about their research may be less inclined to spend their energies on students, and so there may be an inverse relationship. Since students suffer when teaching quality declines, they might be expected to push back against this. But Ellen Hazelkorn, author of “Rankings and the Reshaping of Higher Education”, argues that students “are buying prestige in the labour market”. This means “they want to go to the highest-status university possible”—and the league tables are the only available measure of status. So students, too, in effect encourage universities to spend their money on research rather than teaching.

      The result, says Simon Marginson, Oxford University’s incoming professor of higher education, is “the distribution of teaching further down the academic hierarchy”, which fosters the growth of an “academic precariat”. These PhD students and non-tenured academics do the teaching that the star professors, hired for their research abilities, shun as a chore. The British government is trying to press universities to improve teaching, by creating a “teaching-excellence framework”; but the rating is made up of a student-satisfaction survey, dropout rates and alumni earnings—interesting, but not really a measure of teaching quality. Nevertheless, says Professor Marginson, “everybody recognises this as a problem, and everybody is watching what Britain is doing.”

      A third concern is that competition for rankings encourages stratification within university systems, which in turn exacerbates social inequality. “Excellence initiatives” funnel money to top universities, whose students, even if admission is highly competitive, tend to be the children of the well-off. “Those at the top get more government resources and those at the bottom get least,” points out Ms Hazelkorn. That’s true even in Britain, which, despite not having an excellence initiative, favours top universities through the allocation of research money. According to a study of over 120 universities by Alison Wolf of King’s College London and Andrew Jenkins of University College London, the Russell Group, a self-selected elite of 24 universities, get nearly half of the funding for the entire sector, and increased their share from 44.7% in 2001-02 to 49.1% in 2013-14.

      The rankings race draws other complaints. Some universities have hired “rankings managers”, which critics argue is not a good use of resources. Saudi Arabian universities have been accused of giving highly cited academics lucrative part-time contracts and requiring them to use their Saudi affiliation when publishing.

      Intellectual citizens of nowhere

      Notwithstanding its downsides, the rankings race has encouraged a benign trend with far-reaching implications: internationalisation. The top level of academia, particularly in the sciences, is perhaps the world’s most international community, as Professor Marginson’s work shows. Whereas around 4% of first-degree students in the OECD study abroad, a quarter of PhD students do. Research is getting more global: 22% of science and engineering papers were internationally co-authored in 2016, up from 16% in 2003. The rankings, which give marks for international co-authorship, encourage this trend. That is one reason why Japan, whose universities are as insular as its culture, lags. As research grows—in 2000-14 the annual number of PhDs awarded rose by half in America, doubled in Britain and quintupled in China—so does the size and importance of this multinational network.

      Researchers work together across borders on borderless problems—from climate change to artificial intelligence. They gather at conferences, spend time in each other’s universities and spread knowledge and scholarship across the world. Forced to publish in English, they share at least one language. They befriend each other, marry each other and support each other, politically as well as intellectually. Last year, for instance, when Cambridge University Press blocked online access to hundreds of articles on sensitive subjects, including the Tiananmen Square massacre, at the request of the Chinese government, it faced international protests, and an American academic launched a petition which was signed by over 1,500 academics around the world. CUP backed down.

      The rankings race is thus marked by a happy irony. Driven in part by nationalistic urges, it has fostered the growth of a community that knows no borders. Critics are right that governments and universities obsess too much about rankings. Yet the world benefits from the growth of this productive, international body of scholars.


      https://www.economist.com/international/2018/05/19/how-global-university-rankings-are-changing-higher-education?frsc=dg%7Ce

      #Chine #classement_de_Shanghai #compétition #classement #ranking #QS #Times_Higher_Education #THE #excellence #Exzellenzinitiative #Allemagne #Inde #France #Singapour #Taïwan #Corée_du_Sud #Nigeria #Russie #USA #Etats-Unis #Angleterre #UK #recherche #publications #publications_scientifiques #enseignement #réputation #sciences_sociales #sciences_dures #précarité #précarisation #travail #inégalités #anglais #langue #internationalisation #globalisation #mondialisation

      The ending is very much in keeping with the journal that published this article, alas:

      Critics are right that governments and universities obsess too much about rankings. Yet the world benefits from the growth of this productive, international body of scholars.

      The first version of this article was apparently corrected:

      Correction (May 22nd, 2018): An earlier version of this piece suggested that non-English data and books are not included in the rankings. This is incorrect. The article has been amended to remove that assertion.

      --> but in fact, in reality, it should not have been amended. Having once tried the #H-index on my own publication list, I can tell you that not a single article in a language other than English was counted in the index. And not even all of the articles I have published in English...
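
      For reference, the h-index mentioned above is defined as the largest number h such that the author has h papers with at least h citations each, which is why papers the index simply fails to track (for instance, non-English ones) silently lower the score. A minimal sketch of the computation, with hypothetical citation counts:

      ```python
      def h_index(citations):
          """Return the largest h such that h papers have >= h citations each."""
          ranked = sorted(citations, reverse=True)  # most-cited first
          h = 0
          for rank, count in enumerate(ranked, start=1):
              if count >= rank:
                  h = rank  # the top `rank` papers all have >= rank citations
              else:
                  break
          return h

      # Hypothetical author with five tracked papers:
      print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have >= 4 citations)
      # Dropping untracked papers can only shrink the list, and so the index:
      print(h_index([10, 8, 5]))        # → 3
      ```

      The second call illustrates the complaint above: removing papers from the indexed list can only keep the h-index the same or reduce it, never raise it.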