  • Meet Olga Aleksandrovna Ladyzhenskaya: the Russian mathematician who pushed through the Iron Curtain

    In spite of personal tragedy, dire political circumstances and deteriorating health, her passion for mathematics burned bright

  • Orange’s Sea Cable Repair Fleet Looks Beyond Investment Boom

    The Pierre de Fermat ship
    Source: Orange SA

    • Phone carrier assessing opportunities in offshore wind sector
    • France sees strategic interest in marine cable expertise

    For decades, ships owned by French phone carrier Orange SA have traveled the world’s oceans, installing and fixing the undersea cables that carry internet traffic from one continent to another.

    The fleet of six ships run by Orange Marine is now looking to diversify, even amid the biggest investment boom for the infrastructure since the 1990s. Instead of creating more business, the new high-capacity lines being financed by the tech giants are expected to put older cables out of service, meaning less work for the seaborne repairmen.

    One cable that started up last year highlights the issue. The line, running from the U.S. state of Virginia to Sopelana, Spain, accounts for half the capacity of the dozen or so trans-Atlantic cables. Known as Marea, the 6,600-kilometer (4,101-mile) link owned by Facebook Inc., Microsoft Corp. and Telefonica SA’s Telxius offers the fastest data transmission speeds in the world.

    Jean-Luc Vuillemin, who oversees Orange Marine, sees potential opportunities in servicing offshore wind turbines, he said in an interview aboard the Pierre de Fermat, a 100-meter ship named after the 17th-century mathematician, docked at the port of Brest in northwest France.

    “The ecosystem is pretty favorable right now but this may change in the future,” Vuillemin said. “You need to diversify when the business is in order, so we’re thinking about the next steps.”

    Orange Marine is a small yet profitable business for France’s dominant phone carrier, generating about 100 million euros ($112 million) of annual sales out of Orange’s roughly 41 billion euros of revenue. But it’s considered a strategic asset by the company, whose largest shareholder is the French state.

    Being able to quickly repair cables can be crucial in an emergency, as Algeria experienced in 2015 when a link between Annaba in the country’s northeast and Marseille in southern France was cut by an anchor, disrupting internet service in the North African nation for almost a week.

    Together, Orange Marine and its France-based competitor at Nokia Oyj, Alcatel Submarine Networks, own about one-quarter of the 40 or so ships focused on subsea cables globally, Vuillemin said.

    “Our Western economies are increasingly dependent on these subsea cables. Orange Marine provides strategic autonomy. It’s a matter of sovereignty,” he said.

  • The future of software development: modular, intelligent, and rickety

    The future of software development: freelance, AI-assisted, and rickety

    Nearly four thousand years elapsed between when Egyptian astronomers invented the concept of zero and a British mathematician tacked together the first computer. But once the thing was made, we were off to the races. It was only 130 more years to electronic computers, 40 to the internet, and only nine to smartphones. Now advances in computer science pop off as if discharged from a ticker tape machine.

    But not everything that earns press sees success — or has an impact. Most inventions die following their hype cycle, in what research firm Gartner calls the trough of disillusionment. In this article, I’ll share three signals amidst all the noise that I think indicate trends that will survive to become the biggest forces in (...)

    #hackernoon-top-story #open-source #future-of-work #freelancing #software-development

  • Mathematicians Discover the Perfect Way to Multiply | Quanta Magazine

    Four thousand years ago, the Babylonians invented multiplication. Last month, mathematicians perfected it.

    On March 18, two researchers described the fastest method ever discovered for multiplying two very large numbers. The paper marks the culmination of a long-running search to find the most efficient procedure for performing one of the most basic operations in math.

    “Everybody thinks basically that the method you learn in school is the best one, but in fact it’s an active area of research,” said Joris van der Hoeven, a mathematician at the French National Center for Scientific Research and one of the co-authors.

    The complexity of many computational problems, from calculating new digits of pi to finding large prime numbers, boils down to the speed of multiplication. Van der Hoeven describes their result as setting a kind of mathematical speed limit for how fast many other kinds of problems can be solved.

    “In physics you have important constants like the speed of light which allow you to describe all kinds of phenomena,” van der Hoeven said. “If you want to know how fast computers can solve certain mathematical problems, then integer multiplication pops up as some kind of basic building brick with respect to which you can express those kinds of speeds.”

    Most everyone learns to multiply the same way. We stack two numbers, multiply every digit in the bottom number by every digit in the top number, and do addition at the end. If you’re multiplying two two-digit numbers, you end up performing four smaller multiplications to produce a final product.

    The grade school or “carrying” method requires about n^2 steps, where n is the number of digits of each of the numbers you’re multiplying. So three-digit numbers require nine multiplications, while 100-digit numbers require 10,000 multiplications.

    The carrying method works well for numbers with just a few digits, but it bogs down when we’re multiplying numbers with millions or billions of digits (which is what computers do to accurately calculate pi or as part of the worldwide search for large primes). To multiply two numbers with 1 billion digits requires 1 billion squared, or 10^18, multiplications, which would take a modern computer roughly 30 years.
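    The quadratic count can be illustrated with a toy long-multiplication routine (a sketch for illustration only; the function name is mine, not from the article):

```python
def gradeschool_multiply(x, y):
    """Long multiplication; returns (product, number of single-digit multiplications)."""
    xs = [int(d) for d in str(x)][::-1]  # digits, least significant first
    ys = [int(d) for d in str(y)][::-1]
    total, count = 0, 0
    for i, a in enumerate(xs):
        for j, b in enumerate(ys):
            total += a * b * 10 ** (i + j)  # one single-digit multiplication
            count += 1
    return total, count
```

    For two n-digit inputs the inner loop runs exactly n × n times, which is the quadratic cost described above.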

    For millennia it was widely assumed that there was no faster way to multiply. Then in 1960, the 23-year-old Russian mathematician Anatoly Karatsuba took a seminar led by Andrey Kolmogorov, one of the great mathematicians of the 20th century. Kolmogorov asserted that there was no general procedure for doing multiplication that required fewer than n2 steps. Karatsuba thought there was — and after a week of searching, he found it.

    Karatsuba’s method involves breaking up the digits of a number and recombining them in a novel way that allows you to substitute a small number of additions and subtractions for a large number of multiplications. The method saves time because addition takes only 2n steps, as opposed to n^2 steps.

    “With addition, you do it a year earlier in school because it’s much easier, you can do it in linear time, almost as fast as reading the numbers from right to left,” said Martin Fürer, a mathematician at Pennsylvania State University who in 2007 created what was at the time the fastest multiplication algorithm.

    When dealing with large numbers, you can repeat the Karatsuba procedure, splitting the original number into almost as many parts as it has digits. And with each splitting, you replace multiplications that require many steps to compute with additions and subtractions that require far fewer.

    “You can turn some of the multiplications into additions, and the idea is additions will be faster for computers,” said David Harvey, a mathematician at the University of New South Wales and a co-author on the new paper.
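    Karatsuba’s splitting trick can be sketched in a few lines of Python (a toy illustration under my own naming, not the researchers’ implementation):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's three-product recursion."""
    if x < 10 or y < 10:
        return x * y  # single-digit factor: multiply directly
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)  # split each number at m digits
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # one extra product plus additions/subtractions replaces the two cross products
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

    Each level of the recursion does three half-size multiplications instead of four, which is where the savings over the grade-school method comes from.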

    Karatsuba’s method made it possible to multiply numbers using only n^1.58 single-digit multiplications. Then in 1971 Arnold Schönhage and Volker Strassen published a method capable of multiplying large numbers in n × log n × log(log n) multiplicative steps, where log n is the logarithm of n. For two 1-billion-digit numbers, Karatsuba’s method would require about 165 trillion additional steps.
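    For a back-of-envelope feel for these growth rates, one can tabulate the three step counts for n = 10^9 digits (a sketch that ignores constant factors, so the absolute numbers are only indicative):

```python
import math

n = 10 ** 9  # number of digits in each factor

grade_school = n ** 2                     # 10^18 single-digit multiplications
karatsuba_steps = n ** math.log2(3)       # n^1.58..., roughly 2e14
schonhage_strassen = n * math.log2(n) * math.log2(math.log2(n))  # roughly 1.5e11
```

    Even with constants ignored, the gap between the quadratic method and the quasi-linear ones spans several orders of magnitude at a billion digits.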

    Schönhage and Strassen’s method, which is how computers multiply huge numbers, had two other important long-term consequences. First, it introduced the use of a technique from the field of signal processing called a fast Fourier transform. The technique has been the basis for every fast multiplication algorithm since.

    Second, in that same paper Schönhage and Strassen conjectured that there should be an even faster algorithm than the one they found — a method that needs only n × log n single-digit operations — and that such an algorithm would be the fastest possible. Their conjecture was based on a hunch that an operation as fundamental as multiplication must have a limit more elegant than n × log n × log(log n).

    “It was kind of a general consensus that multiplication is such an important basic operation that, just from an aesthetic point of view, such an important operation requires a nice complexity bound,” Fürer said. “From general experience the mathematics of basic things at the end always turns out to be elegant.”

    Schönhage and Strassen’s ungainly n × log n × log(log n) method held on for 36 years. In 2007 Fürer beat it and the floodgates opened. Over the past decade, mathematicians have found successively faster multiplication algorithms, each of which has inched closer to n × log n, without quite reaching it. Then last month, Harvey and van der Hoeven got there.

    Their method is a refinement of the major work that came before them. It splits up digits, uses an improved version of the fast Fourier transform, and takes advantage of other advances made over the past forty years. “We use [the fast Fourier transform] in a much more violent way, use it several times instead of a single time, and replace even more multiplications with additions and subtractions,” van der Hoeven said.

    Harvey and van der Hoeven’s algorithm proves that multiplication can be done in n × log n steps. However, it doesn’t prove that there’s no faster way to do it. Establishing that this is the best possible approach is much more difficult. At the end of February, a team of computer scientists at Aarhus University posted a paper arguing that if another unproven conjecture is also true, this is indeed the fastest way multiplication can be done.

    And while the new algorithm is important theoretically, in practice it won’t change much, since it’s only marginally better than the algorithms already being used. “The best we can hope for is we’re three times faster,” van der Hoeven said. “It won’t be spectacular.”

    In addition, the design of computer hardware has changed. Two decades ago, computers performed addition much faster than multiplication. The speed gap between multiplication and addition has narrowed considerably over the past 20 years to the point where multiplication can be even faster than addition in some chip architectures. With some hardware, “you could actually do addition faster by telling the computer to do a multiplication problem, which is just insane,” Harvey said.

    Hardware changes with the times, but best-in-class algorithms are eternal. Regardless of what computers look like in the future, Harvey and van der Hoeven’s algorithm will still be the most efficient way to multiply.

    #mathematiques #multiplication

  • Acing the algorithmic beat, journalism’s next frontier » Nieman Journalism Lab

    Algorithms shape large parts of everyday life: our interactions with other people, what products we purchase, the information we see (or don’t see), our investment decisions and our career paths. And we trust their judgment: people are more likely to follow advice when they are being told that it came from an algorithm rather than a human, according to a Harvard Business School study.

    Machines make mistakes

    Despite our growing reliance on algorithms, the Pew Research Center found that Americans are concerned with the fairness and effectiveness of computer programs that make important decisions in their lives: 58 percent feel that algorithms are likely to reflect some level of human bias.

    And they’re right. Even though algorithms can seem “objective” and can sometimes even outperform human judgment, they are still fallible. The notion that algorithms are neutral because math is involved is deeply flawed. After all, algorithms are based on data created by humans — and humans make mistakes and have biases. That’s why American mathematician Cathy O’Neil says: “Algorithms are opinions embedded in code.”

    Machine bias can have grave consequences. A hiring algorithm at a large tech company might teach itself to prefer male applicants over female applicants. Policing software that conducts risk assessments might be biased against black people. And a content recommendation algorithm might amplify conspiracy theories.

    #Algorithmes #Journalisme #Médias

  • Unraveling the Myths of #cardano’s Nomenclature

    Ever wondered why the ticker of Cardano’s cryptocurrency is “ADA”, or why its wallets are named after Greek myths? What is the project named after in the first place, and who are all the people mentioned in the roadmap release titles? This article will provide a bird’s eye view and links for further reading.

    Custom painting that includes Ada Lovelace, Icarus and Daedalus’ labyrinth and Minotaur (source)

    Gerolamo Cardano

    Gerolamo Cardano (1501–1576) was an Italian mathematician, physicist, biologist, physician, chemist, astrologist, philosopher, writer and gambler. During his lifetime he wrote over 200 scientific works and was one of the key figures in the mathematical field of probability during the Renaissance.

    Gerolamo Cardano (source)

    For a cryptocurrency project that is looking to utilize (...)

    #mathematics #blockchain #mythology #cardano-nomenclature

  • When the Heavens Stopped Being Perfect - Issue 58: Self

    I have in my hand a little book titled The Starry Messenger (Sidereus Nuncius in its original Latin), written by the Italian mathematician and scientist Galileo Galilei in 1610. There were 550 books in the first printing of Messenger. One hundred and fifty still remain. A few years ago, Christie’s valued each first edition at between $600,000 and $800,000. My paperback copy was printed in 1989 for about $12. Although the history of science has not awarded Messenger the same laurels as Newton’s Principia or Darwin’s On the Origin of Species, I regard it as one of the most consequential volumes of science ever published. In this little book, Galileo reports what he saw after turning his new telescope toward the heavens: strong evidence that the heavenly bodies are made of ordinary (...)

  • Inventing the Mathematician, Gender, Race, and Our Cultural Understanding of Mathematics


    Where and how do we, as a culture, get our ideas about mathematics and about who can engage with mathematical knowledge? Sara N. Hottinger uses a cultural studies approach to address how our ideas about mathematics shape our individual and cultural relationship to the field. She considers four locations in which representations of mathematics contribute to our cultural understanding of mathematics: mathematics textbooks, the history of mathematics, portraits of mathematicians, and the field of ethnomathematics. Hottinger examines how these discourses shape mathematical subjectivity by limiting the way some groups—including women and people of color—are able to see themselves as practitioners of math. Inventing the Mathematician provides a blueprint for how to engage in a deconstructive project, revealing the limited and problematic nature of the normative construction of mathematical subjectivity.

    #mathématiques #genre #historicisation #racisme ... well, I’m no good at tags tonight

  • The Shallowness of Google Translate - The Atlantic

    An excellent paper by Douglas Hofstadter (ah, D.H., Gödel, Escher and Bach...!!!)

    As a language lover and an impassioned translator, as a cognitive scientist and a lifelong admirer of the human mind’s subtlety, I have followed the attempts to mechanize translation for decades. When I first got interested in the subject, in the mid-1970s, I ran across a letter written in 1947 by the mathematician Warren Weaver, an early machine-translation advocate, to Norbert Wiener, a key figure in cybernetics, in which Weaver made this curious claim, today quite famous:

    When I look at an article in Russian, I say, “This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.”

    Some years later he offered a different viewpoint: “No reasonable person thinks that a machine translation can ever achieve elegance and style. Pushkin need not shudder.” Whew! Having devoted one unforgettably intense year of my life to translating Alexander Pushkin’s sparkling novel in verse Eugene Onegin into my native tongue (that is, having radically reworked that great Russian work into an English-language novel in verse), I find this remark of Weaver’s far more congenial than his earlier remark, which reveals a strangely simplistic view of language. Nonetheless, his 1947 view of translation-as-decoding became a credo that has long driven the field of machine translation.

    Before showing my findings, though, I should point out that an ambiguity in the adjective “deep” is being exploited here. When one hears that Google bought a company called DeepMind whose products have “deep neural networks” enhanced by “deep learning,” one cannot help taking the word “deep” to mean “profound,” and thus “powerful,” “insightful,” “wise.” And yet, the meaning of “deep” in this context comes simply from the fact that these neural networks have more layers (12, say) than do older networks, which might have only two or three. But does that sort of depth imply that whatever such a network does must be profound? Hardly. This is verbal spinmeistery.

    I began my explorations very humbly, using the following short remark, which, in a human mind, evokes a clear scenario:

    In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers.

    The translation challenge seems straightforward, but in French (and other Romance languages), the words for “his” and “her” don’t agree in gender with the possessor, but with the item possessed. So here’s what Google Translate gave me:

    Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes.

    We humans know all sorts of things about couples, houses, personal possessions, pride, rivalry, jealousy, privacy, and many other intangibles that lead to such quirks as a married couple having towels embroidered “his” and “hers.” Google Translate isn’t familiar with such situations. Google Translate isn’t familiar with situations, period. It’s familiar solely with strings composed of words composed of letters. It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things. Let me hasten to say that a computer program certainly could, in principle, know what language is for, and could have ideas and memories and experiences, and could put them to use, but that’s not what Google Translate was designed to do. Such an ambition wasn’t even on its designers’ radar screens.

    It’s hard for a human, with a lifetime of experience and understanding and of using words in a meaningful way, to realize how devoid of content all the words thrown onto the screen by Google Translate are. It’s almost irresistible for people to presume that a piece of software that deals so fluently with words must surely know what they mean. This classic illusion associated with artificial-intelligence programs is called the “Eliza effect,” since one of the first programs to pull the wool over people’s eyes with its seeming understanding of English, back in the 1960s, was a vacuous phrase manipulator called Eliza, which pretended to be a psychotherapist, and as such, it gave many people who interacted with it the eerie sensation that it deeply understood their innermost feelings.

    To me, the word “translation” exudes a mysterious and evocative aura. It denotes a profoundly human art form that graciously carries clear ideas in Language A into clear ideas in Language B, and the bridging act not only should maintain clarity, but also should give a sense for the flavor, quirks, and idiosyncrasies of the writing style of the original author. Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It’s not that the words of the original are sloshing back and forth; it’s the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it—to “press it out”—in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

    This process, mediated via meaning, may sound sluggish, and indeed, in comparison with Google Translate’s two or three seconds per page, it certainly is—but it is what any serious human translator does. This is the kind of thing I imagine when I hear an evocative phrase like “deep mind.”

    A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more “big data” won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today. So I would venture that bigger databases—even vastly bigger ones—won’t turn the trick.

    Another natural question is whether Google Translate’s use of neural networks—a gesture toward imitating brains—is bringing us closer to genuine understanding of language by machines. This sounds plausible at first, but there’s still no attempt being made to go beyond the surface level of words and phrases. All sorts of statistical facts about the huge databases are embodied in the neural nets, but these statistics merely relate words to other words, not to ideas. There’s no attempt to create internal structures that could be thought of as ideas, images, memories, or experiences. Such mental etherea are still far too elusive to deal with computationally, and so, as a substitute, fast and sophisticated statistical word-clustering algorithms are used. But the results of such techniques are no match for actually having ideas involved as one reads, understands, creates, modifies, and judges a piece of writing.

    Let me return to that sad image of human translators, soon outdone and outmoded, gradually turning into nothing but quality controllers and text tweakers. That’s a recipe for mediocrity at best. A serious artist doesn’t start with a kitschy piece of error-ridden bilgewater and then patch it up here and there to produce a work of high art. That’s not the nature of art. And translation is an art.

    In my writings over the years, I’ve always maintained that the human brain is a machine—a very complicated kind of machine—and I’ve vigorously opposed those who say that machines are intrinsically incapable of dealing with meaning. There is even a school of philosophers who claim computers could never “have semantics” because they’re made of “the wrong stuff” (silicon). To me, that’s facile nonsense. I won’t touch that debate here, but I wouldn’t want to leave readers with the impression that I believe intelligence and understanding to be forever inaccessible to computers. If in this essay I seem to come across sounding that way, it’s because the technology I’ve been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas.

    From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful, and, as a corollary, able to translate admirably between languages. There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are. And that’s not around the corner. Indeed, I believe it is still extremely far away. At least that is what this lifelong admirer of the human mind’s profundity fervently hopes.

    When, one day, a translation engine crafts an artistic novel in verse in English, using precise rhyming iambic tetrameter rich in wit, pathos, and sonic verve, then I’ll know it’s time for me to tip my hat and bow out.

    #Traduction #Google_translate #Deep_learning

  • Mathematical secrets of ancient #tablet unlocked after nearly a century of study | #Science | The Guardian



    “A treasure trove of Babylonian tablets exists, but only a fraction of them have been studied yet. The mathematical world is only waking up to the fact that this ancient but very sophisticated mathematical culture has much to teach us.”

    They suggest that the mathematics of Plimpton 322 indicates that it originally had six columns and 38 rows. They believe it was a working tool, not – as some have suggested – simply a teaching aid for checking calculations. “Plimpton 322 was a powerful tool that could have been used for surveying fields or making architectural calculations to build palaces, temples or step pyramids,” Mansfield said.

    As far back as 1945, the Austrian mathematician Otto Neugebauer and his associate Abraham Sachs were the first to note that Plimpton 322 has 15 pairs of numbers forming parts of Pythagorean triples: three whole numbers a, b and c such that a^2 + b^2 = c^2. The integers 3, 4 and 5 are a well-known example of a Pythagorean triple, but the values on Plimpton 322 are often considerably larger, with, for example, the first row referencing the triple 119, 120 and 169.
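    The Pythagorean relation for the triples cited can be checked directly:

```python
# Check the triples mentioned: (3, 4, 5) and Plimpton 322's first row (119, 120, 169)
for a, b, c in [(3, 4, 5), (119, 120, 169)]:
    assert a ** 2 + b ** 2 == c ** 2  # a squared plus b squared equals c squared
```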


    via https://diasp.eu/posts/5951983

    #mathématique #calculateur #antiquité #Babylone

  • What is the alt right? A linguistic data analysis of 3 billion Reddit comments shows a disparate group that is quickly uniting — Quartz

    We’re witnessing the radicalization of young white men through the medium of frog memes. In order to see it, all you need to do is look at the words coming out of their mouths. The alt-right isn’t yet united, but it soon will be.

    #text-mining #trolls #alt-right #Trump #reddit

  • Pentagon Tiling Proof Solves Century-Old Math Problem | Quanta Magazine

    A French mathematician has completed the classification of all convex pentagons, and therefore all convex polygons, that tile the plane.

    Rao said he felt disappointed not to have discovered any additional families, but tiling experts say that proving a complete list of 15 is more significant than simply finding a new working example.

    #mathématiques #géométrie #pavage #pentagones #déception

  • The World according to Eratosthenes


    Projection: Unknown

    Description: A facsimile of the world map by Eratosthenes (around 220 BC). Eratosthenes is the ancient Greek mathematician and geographer credited with devising the first system of latitude and longitude. He was also the first known person to calculate the circumference of the earth. This is a facsimile of the map he produced based on his calculations. The map shows the routes of exploration by Nearchus from the mouth of the Indus River (325 BC, after the expedition to India by Alexander the Great), and Pytheas (300 BC) to Britannia. Place names include Hellas (Greece), Pontus Euxinus (Black Sea), Mare Caspium (Caspian Sea), Gades (Cadiz), Columnæ Herculis (Gibraltar), Taprobane (Sri Lanka), Iberes (Iberian peninsula), Ierne (Ireland), and Brettania (Britain), the rivers Ister (Danube), Oxus (Amu Darya), Ganges, and Nilus (Nile), and mountain systems. The map shows his birthplace in Libya (Cyrene), the Egyptian cities of Alexandria and Syene (Aswan) where Eratosthenes made his calculations of the earth’s circumference, and the latitudes and longitudes of several locations based on his measurements in stadia.

    Place Names: A Complete Map of Globes and Multi-continent, Europa, -Libya, -Asia, -India, -Scythia, -Arabi
    ISO Topic Categories: society
    Keywords: The World according to Eratosthenes, physical, -historical, kEarlyMapsFacsimile, physical features, topographical, society, Unknown, 220 BC
    Source: Ernest Rhys, Ed., A Literary and Historical Atlas of Asia (New York, NY: E.P. Dutton & CO., 1912) 2
    Map Credit: Courtesy the private collection of Roy Winkelman

    #cartographie #cartographie_ancienne

  • The Hoax That Backfired: How an Attempt to Discredit Gender Studies Will Only Strengthen It - Pacific Standard

    Fortunately, fine nonsense sometimes backfires on its authors... but this is all too emblematic of the current mood: anti-science on one side and anti-women on the other.

    The most recent stunt to roil the academic waters took about 3,000 words and focused on the penis. The authors, Peter Boghossian and James Lindsay—a philosopher and a mathematician—co-authored a purposefully bogus paper ("The Conceptual Penis as a Social Construct") in which they promoted the proposition that “The penis vis-à-vis maleness is an incoherent construct.”

    The piece, as intended, is complete nonsense. Parodying postmodern jargon, the authors explain how “penises are not best understood as the male sexual organ, but instead as an enacted social construct.”

    The spoof was accepted by a peer-reviewed journal called Cogent Social Sciences. Needless to say, the authors’ revelation of their hoax rankled critics supportive of gender studies. More than any other point, the critics argued that the open-access journal that accepted the article was a pay-to-publish junk job, and therefore not an accurate reflection of the discipline itself.

    This is the rhetoric of humiliation. According to Neel Burton, writing in Psychology Today: “To humiliate someone is to assert power over him by denying and destroying his status claims. To this day, humiliation remains a common form of punishment, abuse, and oppression.” Humiliation, furthermore, can also serve to “enforce a particular social order.” It follows that, in light of these motives, “humiliating someone, even a criminal, is rarely, if ever, a proportionate or justified response.” It is, most critically, a fundamentally different beast than embarrassment.

    In the most recent scholarly effort to define humiliation precisely, the authors conclude: “humiliation is defined by feeling powerless, small, and inferior in a situation in which one is brought down and in which an audience is present – which may contribute to these diminutive feelings – leading the person to appraise the situation as unfair and resulting in a mix of emotions, most notably disappointment, anger, and shame.”

    This, I would suggest, is what Boghossian and Lindsay were attempting to achieve when they submitted their bogus article for publication. They wanted to do something completely different than discredit the entire field of gender studies. They wanted to humiliate all those who are in it. Which is to say, they were being bullies.

    #gender_studies #open_access #air_du_temps

  • Essay on Science: Shadows of Evidence | Simons Foundation
    (text from February 2013)

    The Evidence of Coincidence

    In the early 1970s, the mathematician John McKay made a simple observation. He remarked that 

    196,884 = 1 + 196,883 

    What is peculiar about this formula is that the left-hand side of the equation, i.e., the number 196,884, is well known to most practitioners of a certain branch of mathematics (complex analysis, and the theory of modular forms), (3) while 196,883, which appears on the right, is well known to most practitioners of what was in the 1970s quite a different branch of mathematics (the theory of finite simple groups). (4) McKay took this “coincidence” — the closeness of those two numbers (5) — as evidence that there had to be a very close relationship between these two disparate branches of pure mathematics, and he was right! Sheer coincidences in math are often not merely sheer; they’re often clues — evidence of something missing, yet to be discovered.
    (3) 196,884 is the first interesting coefficient of a basic function in that branch of mathematics: the elliptic modular function.

    (4) 196,883 is the smallest dimension of a Euclidean space that has the largest sporadic simple group (the monster group) as a subgroup of its symmetries.

    (5) McKay gave a convincing interpretation of the 1 in the formula as well.
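    McKay's single equation turned out to be the first of a whole family of such identities, now known as "monstrous moonshine." The numbers below are standard facts about the elliptic modular function and the monster group (they are not taken from the essay itself), and a few lines of Python suffice to check the first relations:

```python
# Hypothetical illustration: the first coefficients of the elliptic modular
# function j (for q, q^2, q^3), and the dimensions of the smallest irreducible
# representations of the monster group. These values are standard facts from
# the moonshine literature, not drawn from the essay above.
j_coeffs = [196884, 21493760, 864299970]
monster_dims = [1, 196883, 21296876, 842609326]

# McKay's observation, and the next two identities noticed soon afterwards:
assert j_coeffs[0] == monster_dims[0] + monster_dims[1]
assert j_coeffs[1] == monster_dims[0] + monster_dims[1] + monster_dims[2]
assert j_coeffs[2] == (2 * monster_dims[0] + 2 * monster_dims[1]
                       + monster_dims[2] + monster_dims[3])
print("the first moonshine identities check out")
```

Each j-coefficient decomposes as a small non-negative combination of monster representation dimensions, which is exactly the "missing something" that McKay's coincidence pointed to.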

    #beauté_des_maths !

  • The Tangled History of Big Bang Science - Facts So Romantic

    Diagram outlining the critical stages of evolution of the Universe from the Big Bang to the present. (Image: CERN / Flickr)

    For a theory of the universe as successful as the Big Bang, it may come as a surprise to realize how many complications its promoters had to stumble through. Let’s begin with the unfortunate figure of Alexander Friedmann, the brilliant Russian mathematician and meteorologist who was the first to exploit something remarkable about Einstein’s “field equations,” the set of ten equations that reimagined gravity as an outcome of curved spacetime. In 1917, Einstein, perhaps so as not to seem too out-there, argued that one could use the field equations to derive a model of the universe very much like the traditional Newtonian view—an eternally static, or “steady-state,” cosmos. The (...)

  • Coast Lines - Geographical


    Coasts are a more complex geographical entity than you might believe. Benjamin Hennig maps the world’s coasts to learn more

    The question of the length of the world’s coastlines is not as easy to answer as it might initially seem. The British mathematician and meteorologist Lewis Fry Richardson was among the first to investigate the fractal nature of boundary lines, in the early twentieth century. The rougher a coastline is, the more pronounced its fractal character, and the harder it becomes to pin down its length, since the measured length changes with the scale and resolution at which the coastline is examined.

    Richardson’s fellow mathematician Benoit Mandelbrot investigated the phenomenon further by looking at the length of the coast of Britain. He explained how the measured length of a coastline increases as the ruler used to measure it gets smaller. This has become known as the Coastline Paradox, since it suggests that the length of a coastline is, in theory, infinite or undefinable.
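    The effect is easy to reproduce numerically. The sketch below (a hypothetical illustration, not Richardson's or Mandelbrot's actual data) builds a Koch curve, a standard stand-in for a rough coastline, and measures it with Richardson's "divider" method: walk along the curve with a ruler of fixed length and count the steps. Shrinking the ruler makes the measured length grow without bound.

```python
import math

def koch(p, q, depth):
    """Return the points of a Koch curve from p to q (q excluded)."""
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)            # one third of the way along
    b = (x0 + 2 * dx, y0 + 2 * dy)    # two thirds of the way along
    # Apex of the equilateral bump: middle third rotated by +60 degrees.
    c = (a[0] + dx / 2 - dy * math.sqrt(3) / 2,
         a[1] + dy / 2 + dx * math.sqrt(3) / 2)
    pts = []
    for s, t in ((p, a), (a, c), (c, b), (b, q)):
        pts.extend(koch(s, t, depth - 1))
    return pts

def divider_length(points, ruler):
    """Richardson's divider method: step along the polyline with a
    fixed ruler; the measured length is steps * ruler."""
    anchor, steps = points[0], 0
    for pt in points[1:]:
        if math.hypot(pt[0] - anchor[0], pt[1] - anchor[1]) >= ruler:
            anchor = pt
            steps += 1
    return steps * ruler

curve = koch((0.0, 0.0), (1.0, 0.0), depth=7) + [(1.0, 0.0)]
for ruler in (1 / 3, 1 / 9, 1 / 27, 1 / 81):
    print(f"ruler {ruler:.4f} -> measured length {divider_length(curve, ruler):.3f}")
```

For a true fractal the measured length never converges as the ruler shrinks, which is precisely why "how long is the coastline?" has no single answer.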

    #cartographie #sémiologie #généralisation côtes

  • Why Blind People Are Better at Math - Facts So Romantic

    Bernard Morin developed glaucoma at an early age and was blind by the time he was six years old. Despite his inability to see, Morin went on to become a master topologist—a mathematician who studies the intrinsic properties of geometric forms in space—and earned renown for his visualization of an inside-out sphere. For sighted people, it can be difficult to imagine learning math, let alone mastering it, without vision (or even with it). In grade schools, mathematics instruction tends to rely heavily on visual aids—our fingers, pieces of pie, and equations scribbled on paper. Psychology and neuroscience support the notion that math and sight are tightly intertwined. Studies show that mathematical abilities in children are highly correlated with their visuospatial capacities—measured by (...)

  • An “Infinitely Rich” Mathematician Turns 100 - Facts So Romantic

    At the Hotel Parco dei Principi in Rome, in September of 1973, the Hungarian mathematician Paul Erdős approached his friend Richard Guy with a request. He said, “Guy, veel you have a coffee?” It cost a dollar, a small fortune to a professor of mathematics at the hinterland University of Calgary who was not much of a coffee drinker. Yet, as Guy later recalled—during a memorial talk following Erdős’s death at age 83 two decades ago—he was curious why the great man had sought him out. Guy and Erdős were in the Eternal City for an international colloquium on combinatorial theory, so Erdős—who sustained himself with espresso and other stimulants, worked on math problems 19 hours a day, and in his lifetime published in excess of 1,500 papers with more than 500 collaborators—most likely had another (...)

  • Why Physics Is Not a Discipline - Issue 35: Boundaries

    Have you heard the one about the biologist, the physicist, and the mathematician? They’re all sitting in a cafe watching people come and go from a house across the street. Two people enter, and then some time later, three emerge. The physicist says, “The measurement wasn’t accurate.” The biologist says, “They have reproduced.” The mathematician says, “If now exactly one person enters the house then it will be empty again.” Hilarious, no? You can find plenty of jokes like this—many invoke the notion of a spherical cow—but I’ve yet to find one that makes me laugh. Still, that’s not what they’re for. They’re designed to show us that these academic disciplines look at the world in very different, perhaps incompatible ways.

    Instructive: Phase transitions in physical systems, like that between water vapor (...)

  • Story of cities #5: Benin City, the mighty medieval capital now lost without trace
    With its mathematical layout and earthworks longer than the Great Wall of China, Benin City was one of the best planned cities in the world when London was a place of ‘thievery and murder’. So why is nothing left?