#Robert_Maxwell #Reed-Elsevier #Elsevier #multinationales #business #Pergamon
With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more profitable. In 2010, Elsevier’s scientific publishing arm reported profits of £724m on just over £2bn in revenue. It was a 36% margin – higher than Apple, Google, or Amazon posted that year.
In order to make money, a traditional publisher – say, a magazine – first has to cover a multitude of costs: it pays writers for the articles; it employs editors to commission, shape and check the articles; and it pays to distribute the finished product to subscribers and retailers. All of this is expensive, and successful magazines typically make profits of around 12-15%.
The way to make money from a scientific article looks very similar, except that scientific publishers manage to duck most of the actual costs. Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place.
A 2005 Deutsche Bank report referred to it as a “bizarre” “triple-pay” system, in which “the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product”.
Many scientists also believe that the publishing industry exerts too much influence over what scientists choose to study, which is ultimately bad for science itself. Journals prize new and spectacular results – after all, they are in the business of selling subscriptions – and scientists, knowing exactly what kind of work gets published, align their submissions accordingly. This produces a steady stream of papers, the importance of which is immediately apparent. But it also means that scientists do not have an accurate map of their field of inquiry. Researchers may end up inadvertently exploring dead ends that their fellow scientists have already run up against, solely because the information about previous failures has never been given space in the pages of the relevant scientific publications.
It is hard to believe that what is essentially a for-profit oligopoly functioning within an otherwise heavily regulated, government-funded enterprise can avoid extinction in the long run. But publishing has been deeply enmeshed in the science profession for decades. Today, every scientist knows that their career depends on being published, and professional success is especially determined by getting work into the most prestigious journals. The long, slow, nearly directionless work pursued by some of the most influential scientists of the 20th century is no longer a viable career option. Under today’s system, the father of genetic sequencing, Fred Sanger, who published very little in the two decades between his 1958 and 1980 Nobel prizes, may well have found himself out of a job.
Improbable as it might sound, few people in the last century have done more to shape the way science is conducted today than Maxwell.
Scientific articles are about unique discoveries: one article cannot substitute for another. If a serious new journal appeared, scientists would simply request that their university library subscribe to that one as well. If Maxwell was creating three times as many journals as his competition, he would make three times more money.
“At the start of my career, nobody took much notice of where you published, and then everything changed in 1974 with Cell,” Randy Schekman, the Berkeley molecular biologist and Nobel prize winner, told me. #Cell (now owned by Elsevier) was a journal started by Massachusetts Institute of Technology (MIT) to showcase the newly ascendant field of molecular biology. It was edited by a young biologist named #Ben_Lewin, who approached his work with an intense, almost literary bent. Lewin prized long, rigorous papers that answered big questions – often representing years of research that would have yielded multiple papers in other venues – and, breaking with the idea that journals were passive instruments to communicate science, he rejected far more papers than he published.
Suddenly, where you published became immensely important. Other editors took a similarly activist approach in the hopes of replicating Cell’s success. Publishers also adopted a metric called “#impact_factor,” invented in the 1960s by #Eugene_Garfield, a librarian and linguist, as a rough calculation of how often papers in a given journal are cited in other papers. For publishers, it became a way to rank and advertise the scientific reach of their products. The new-look journals, with their emphasis on big results, shot to the top of these new rankings, and scientists who published in “high-impact” journals were rewarded with jobs and funding. Almost overnight, a new currency of prestige had been created in the scientific world. (Garfield later referred to his creation as “like nuclear energy … a mixed blessing”.)
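Garfield's metric is simple at its core: a journal's two-year impact factor for a given year is the number of citations received that year by articles the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch of the calculation, using purely illustrative numbers for a hypothetical journal:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year journal impact factor: citations received this year
    to the journal's articles from the previous two years, divided
    by the number of citable items published in those two years."""
    if citable_items == 0:
        raise ValueError("journal published no citable items")
    return citations / citable_items

# Hypothetical journal: 3,000 citations this year to its articles
# from the previous two years, during which it published 500 items.
print(impact_factor(3000, 500))  # 6.0
```

The division is trivial; the leverage comes from what counts as a "citable item" and which citations are tallied, definitional choices that publishers have long contested precisely because the resulting number drives rankings, jobs, and funding.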
And so science became a strange co-production between scientists and journal editors, with the former increasingly pursuing discoveries that would impress the latter. These days, given a choice of projects, a scientist will almost always reject both the prosaic work of confirming or disproving past studies, and the decades-long pursuit of a risky “moonshot”, in favour of a middle ground: a topic that is popular with editors and likely to yield regular publications. “Academics are incentivised to produce research that caters to these demands,” said the biologist and Nobel laureate Sydney Brenner in a 2014 interview, calling the system “corrupt.”
As Maxwell had predicted, competition didn’t drive down prices. Between 1975 and 1985, the average price of a journal doubled. The New York Times reported that in 1984 it cost $2,500 to subscribe to the journal Brain Research; in 1988, it cost more than $5,000. That same year, Harvard Library overran its research journal budget by half a million dollars.
Scientists occasionally questioned the fairness of this hugely profitable business to which they supplied their work for free, but it was university librarians who first realised the trap in the market Maxwell had created. The librarians used university funds to buy journals on behalf of scientists. Maxwell was well aware of this. “Scientists are not as price-conscious as other professionals, mainly because they are not spending their own money,” he told his publication Global Business in a 1988 interview. And since there was no way to swap one journal for another, cheaper one, the result was, Maxwell continued, “a perpetual financing machine”. Librarians were locked into a series of thousands of tiny monopolies. There were now more than a million scientific articles being published a year, and they had to buy all of them at whatever price the publishers wanted.
With the purchase of Pergamon’s 400-strong catalogue, Elsevier now controlled more than 1,000 scientific journals, making it by far the largest scientific publisher in the world.
At the time of the merger, Charkin, the former Macmillan CEO, recalls advising Pierre Vinken, the CEO of Elsevier, that Pergamon was a mature business, and that Elsevier had overpaid for it. But Vinken had no doubts, Charkin recalled: “He said, ‘You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. Then you market the thing and your salespeople go out there to sell subscriptions, which is slow and tough, and you try to make the journal as good as possible. That’s what happened at Pergamon. And then we buy it and we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.’ He was right and I was wrong.”
By 1994, three years after acquiring Pergamon, Elsevier had raised its prices by 50%. Universities complained that their budgets were stretched to breaking point – the US-based Publishers Weekly reported librarians referring to a “doomsday machine” in their industry – and, for the first time, they began cancelling subscriptions to less popular journals.
In 1998, Elsevier rolled out its plan for the internet age, which would come to be called “The Big Deal”. It offered electronic access to bundles of hundreds of journals at a time: a university would pay a set fee each year – according to a report based on freedom of information requests, Cornell University’s 2009 tab was just short of $2m – and any student or professor could download any journal they wanted through Elsevier’s website. Universities signed up en masse.
Those predicting Elsevier’s downfall had assumed scientists experimenting with sharing their work for free online could slowly outcompete Elsevier’s titles by replacing them one at a time. In response, Elsevier created a switch that fused Maxwell’s thousands of tiny monopolies into one so large that, like a basic resource – say water, or power – it was impossible for universities to do without. Pay, and the scientific lights stayed on, but refuse, and up to a quarter of the scientific literature would go dark at any one institution. It concentrated immense power in the hands of the largest publishers, and Elsevier’s profits began another steep rise that would lead them into the billions by the 2010s. In 2015, a Financial Times article anointed Elsevier “the business the internet could not kill”.
Publishers are now wound so tightly around the various organs of the scientific body that no single effort has been able to dislodge them. In a 2015 report, an information scientist from the University of Montreal, Vincent Larivière, showed that Elsevier owned 24% of the scientific journal market, while Maxwell’s old partners Springer, and his crosstown rivals Wiley-Blackwell, controlled about another 12% each. These three companies accounted for half the market. (An Elsevier representative familiar with the report told me that by their own estimate they publish only 16% of the scientific literature.)
Elsevier says its primary goal is to facilitate the work of scientists and other researchers. An Elsevier rep noted that the company received 1.5m article submissions last year, and published 420,000; 14 million scientists entrust Elsevier to publish their results, and 800,000 scientists donate their time to help them with editing and peer-review.
In a sense, it is not any one publisher’s fault that the scientific world seems to bend to the industry’s gravitational pull. When governments including those of China and Mexico offer financial bonuses for publishing in high-impact journals, they are not responding to a demand by any specific publisher, but following the rewards of an enormously complex system that has to reconcile the utopian ideals of science with the commercial goals of the publishers that dominate it. (“We scientists have not given a lot of thought to the water we’re swimming in,” Neal Young told me.)
Since the early 2000s, scientists have championed an alternative to subscription publishing called “open access”. This solves the difficulty of balancing scientific and commercial imperatives by simply removing the commercial element. In practice, this usually takes the form of online journals, to which scientists pay an upfront fee to cover editing costs, which then ensure the work is available free to access for anyone in perpetuity. But despite the backing of some of the biggest funding agencies in the world, including the Gates Foundation and the Wellcome Trust, only about a quarter of scientific papers are made freely available at the time of their publication.
The idea that scientific research should be freely available for anyone to use is a sharp departure, even a threat, to the current system – which relies on publishers’ ability to restrict access to the scientific literature in order to maintain its immense profitability. In recent years, the most radical opposition to the status quo has coalesced around a controversial website called Sci-Hub – a sort of Napster for science that allows anyone to download scientific papers for free. Its creator, Alexandra Elbakyan, a Kazakhstani, is in hiding, facing charges of hacking and copyright infringement in the US. Elsevier recently obtained a $15m injunction (the maximum allowable amount) against her.
Elbakyan is an unabashed utopian. “Science should belong to scientists and not the publishers,” she told me in an email. In a letter to the court, she cited Article 27 of the UN’s Universal Declaration of Human Rights, asserting the right “to share in scientific advancement and its benefits”.
Whatever the fate of Sci-Hub, it seems that frustration with the current system is growing. But history shows that betting against science publishers is a risky move. After all, back in 1988, Maxwell predicted that in the future there would only be a handful of immensely powerful publishing companies left, and that they would ply their trade in an electronic age with no printing costs, leading to almost “pure profit”. That sounds a lot like the world we live in now.
#Butterworths #Springer #Paul_Rosbaud #histoire #Genève #Pergamon #Oxford_United #Derby_County_FC #monopole #open_access #Sci-Hub #Alexandra_Elbakyan