• How Google made the world go viral - The Verge
    https://www.theverge.com/23846048/google-search-memes-images-pagerank-altavista-seo-keywords

    The first thing ever searched on Google was the name Gerhard Casper, a former Stanford president. As the story goes, in 1998, Larry Page and Sergey Brin demoed Google for computer scientist John Hennessy. They searched Casper’s name on both AltaVista and Google. The former pulled up results for Casper the Friendly Ghost; the latter pulled up information on Gerhard Casper the person.

    What made Google’s results different from AltaVista’s was its algorithm, PageRank, which ranked results based on the number of links between pages. In fact, the site’s original name, BackRub, was a reference to the backlinks it was using to rank results. If your site was linked to by other authoritative sites, it would place higher in the list than some random blog that no one was citing.
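
    To make the backlink idea concrete, here is a minimal toy sketch in Python of the kind of iterative computation PageRank performs. The four-page "web", the page names and the damping factor are illustrative assumptions, not details from the article or from Google's actual implementation.

    ```python
    # Toy PageRank: pages that are linked to by highly ranked pages end up
    # ranked higher themselves. Didactic sketch only, not Google's algorithm.
    links = {                       # hypothetical miniature web: page -> pages it links to
        "stanford.edu": ["casper-bio.html", "altavista.com"],
        "casper-bio.html": ["stanford.edu"],
        "random-blog.html": ["casper-bio.html"],
        "altavista.com": [],
    }

    damping = 0.85
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}        # start from a uniform rank

    for _ in range(50):                                # iterate until ranks settle
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                           # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank

    print(sorted(rank.items(), key=lambda kv: -kv[1]))
    ```

    A page cited by already well-ranked pages accumulates more rank with each pass, which is the effect described above: a site linked to by authoritative sites ends up above a random, uncited blog.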

    Google officially went online later in 1998. It quickly became so inseparable from both the way we use the internet and, eventually, culture itself, that we almost lack the language to describe what Google’s impact over the last 25 years has actually been. It’s like asking a fish to explain what the ocean is. And yet, all around us are signs that the era of “peak Google” is ending or, possibly, already over.

    This year, The Verge is exploring how Google Search has reshaped the web into a place for robots — and how the emergence of AI threatens Google itself.

    What happens when Google Search doesn’t have the answers?
    How Google tried to fix the web — by taking it over
    The store is for humans, the storefront is for robots
    The little search engine that couldn’t
    Who killed Google Reader?

    There is a growing chorus of complaints that Google is not as accurate, as competent, as dedicated to search as it once was. The rise of massive closed algorithmic social networks like Meta’s Facebook and Instagram began eating the web in the 2010s. More recently, there’s been a shift to entertainment-based video feeds like TikTok — which is now being used as a primary search engine by a new generation of internet users.

    For two decades, Google Search was the largely invisible force that determined the ebb and flow of online content. Now, for the first time since Google’s launch, a world without it at the center actually seems possible. We’re clearly at the end of one era and at the threshold of another. But to understand where we’re headed, we have to look back at how it all started.

    #Google #Histoire_numérique #Super

  • 30 years ago this month, the Mosaic web browser officially launched and changed the world - Neowin
    https://www.neowin.net/news/30-years-ago-this-month-the-mosaic-web-browser-officially-launched-and-chang

    It’s sometimes difficult to label a product or service that truly changed the world after it was released. However, it definitely can be said that the release of the Mosaic web browser did just that. After a preliminary release in January 1993, version 1.0 of Mosaic was launched 30 years ago this month, on April 22, 1993.

    Let’s get this part out of the way: Mosaic was not the first web browser ever released. That honor belongs to WorldWideWeb, which was launched a few years before in 1990 by developer Tim Berners-Lee when he worked at CERN. Later other browsers like Viola and Cello were launched. However, Mosaic was different.

    The browser was first developed by Marc Andreessen and Eric Bina when they were graduate students at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. Unlike earlier web browsers, which showed text and images in separate windows, Mosaic’s biggest innovation was that it could display both in the same window. It made looking at websites feel like reading a magazine page.

    Mosaic also let users click on hyperlinks to go to other pages or sites, instead of manually typing a URL address. It had a user interface design that was easy to understand. The now familiar buttons for going back or forward through sites, or refreshing a page, were in place with Mosaic.

    While originally launched for Unix systems, Mosaic versions were released later in 1993 for Windows and Macintosh. The need for closed online services like AOL, CompuServe, Prodigy, and the others that popped up in the 1980s slowly started to go away: all you needed to access the internet was an ISP and Mosaic installed. The NCSA’s official Mosaic website states that by December 1993 “more than 5,000 copies of the browser were being downloaded a month and the center was receiving hundreds of thousands of email inquiries a week.” Keep in mind this was an era when most homes could only connect to the internet with a 28.8k phone modem.

    Starting in 1994, the US National Science Foundation funded further development of Mosaic. Even then, however, the writing was on the wall for the browser. Later that year, Mosaic’s co-creator Marc Andreessen left NCSA and helped to found Mosaic Communications Corporation.

    That company released its first browser, called Mosaic Netscape, in late 1994. The NCSA threatened legal action against the new company for using the Mosaic brand for both the browser and the company. The browser was eventually renamed Netscape Navigator, and the company became Netscape Communications Corporation.

    Netscape quickly became the browser of choice for most Internet users, which meant Mosaic was being downloaded and used less and less. As a result, in January 1997, the NCSA shut down the development of the web browser. Of course, Netscape soon had to deal with Microsoft’s efforts with its own Internet Explorer browser, but that is a whole different story that we may write about another day. Netscape met its eventual end in 2008.

    While the NCSA did shut down its development, the organization continues to be proud of Mosaic. Ten years after the browser’s official launch, it held a birthday party of sorts for it. Rick Rashid, the founder of Microsoft Research, was one of the event’s guests.

    While Mosaic was ultimately a short-lived computer application, it certainly had a massive influence on the internet, and on the world as a whole. Even in this world of apps and social networks, websites continue to be the primary way we get information online. Mosaic’s combination of words and images on one web page, its use of embedded hyperlinks, and its standard UI formed the basis for all web browsers released afterwards. The National Science Foundation’s article on Mosaic, posted in 2004, sums up its impact.

    “Without Mosaic, Web browsers might not have happened or be what they are today,” said Peter Freeman, NSF assistant director for CISE. “The growth of the Web and its impact on daily life shows the kind of dramatic payoff that NSF investments in computer science research can have for all areas of science and engineering, education and society as a whole.”

    That impact remains true in 2023, and it will likely continue to be felt for years to come.

    #Histoire_numerique #Mosaic #Browsers #Web

  • Opinion | The 25 Tweets That Show How Twitter Changed the World - The New York Times
    https://www.nytimes.com/interactive/2023/02/10/opinion/twitter-all-time-tweets.html

    On Wednesday, Twitter announced that users who pay extra will be able to send their thoughts into the world in tweets of up to 4,000 characters, instead of 280 or less. A few hours later, the site glitched. Users couldn’t tweet; they couldn’t DM; #TwitterDown began trending. All of it — the muddled sense of identity, the breakdown of basic function — confirmed the sense that Twitter, a site that has hosted the global conversation for almost two decades, had become a rickety shell of itself, that its best days were behind it and that it would never be as significant again.

    But what, exactly, is being lost? We wanted to capture the ways that Twitter — a platform used by a tiny percentage of the world’s population — changed how we protest, consume news, joke and, of course, argue. So we set ourselves to the task of sorting through the trillions of tweets sent since 2006 to determine which were just noise and which deserved a place in the history books. And then we asked: Could we maybe even … rank them?

    What you see below is our list, compiled with the help of experts, of the 25 most important tweets. Like all such rankings, we hope it can serve as a starting point for discussions and arguments, both on Twitter and off. What was ranked too high? Too low? What did we leave off?

    Yes, we know: There’s something a little absurd about this exercise. Twitter contains such a wide range of humanity: How do you rank the tweet that got Justine Sacco canceled against the tweet that ignited #MeToo?

    And yet this list tells a bigger story about how 17 years of messy, vibrant, sometimes ugly, always lively conversation has shaped the world. Just where did “Hope I don’t get AIDS. Just kidding. I’m white!” land compared with “If you’ve been sexually harassed or assaulted write ‘me too’ as a reply to this tweet”? You’ll have to scroll to find out.

    How We Did It: To compile this ranking, Times Opinion rounded up a group of panelists with widely varied backgrounds but one thing in common: They know a lot about Twitter. (The full list of panelists is at the bottom of the article.) We asked them to submit tweets they thought were good candidates for the most important of all time, with the only criterion being that the tweets had to be in English. We used these to create a list, then sent that list back out to our panelists with instructions to rank the tweets in order of importance and to share their insights about them: why they thought a tweet was important or why it wasn’t. We then crunched the numbers and compiled their insights, edited for content and clarity, into the list you see here.

    #Twitter #Histoire_numerique

  • Sur quelles technologies les métavers reposent-ils ?
    https://theconversation.com/sur-quelles-technologies-les-metavers-reposent-ils-177934

    By Pascal Guitton and Nicolas Roussel

    In October 2021, Facebook announced the development of a new virtual environment called Metaverse. The news prompted many reactions, both as commentary in the media and as declarations of intent from companies. As is often the case with a technological innovation, reactions are mixed: an impending hell for some, paradise for others. What should we make of it?
    What are we talking about?

    The concept of the metaverse comes from science-fiction literature. The term first appeared in a 1992 novel, Neal Stephenson’s Snow Crash (published in French as Le samouraï virtuel), to describe a computer-generated universe accessed with goggles and earphones. Earlier novels had described more or less similar virtual worlds under other names: the simulator in a 1968 novel by Daniel F. Galouye, or cyberspace in William Gibson’s novels of the early 1980s, for example.

    The first concrete implementations of the concept date back to 1990-1995 with Active Worlds in the United States, or 1997 with Le deuxième monde in France. They were long limited by the technical capabilities of the time.

    #Metaverse #Métavers #Histoire_numérique

  • Bill Gates Predicts the Future in a Rediscovered Microsoft Video from 1994 – The New Stack
    https://thenewstack.io/bill-gates-predicts-the-future-in-a-rediscovered-microsoft-video-from-19

    The Future

    Screenshot from the 1994 “Welcome to Microsoft” video (via the Computer History Archive Project’s YouTube channel)

    In a weird way, Gates misses part of the future, imagining a burgeoning market for software publishing but without foreseeing the ultimate prominence of online applications — and, eventually, data storage and at-scale data analytics. And “It was that ‘computer on every desk’ that led them to miss the smartphone,” quips one YouTube commenter.

    Gates seems to be thinking about a world where we gather together in our cozy dens, enjoying space simulators and family-finance spreadsheets. There he is, wearing an earth-toned sweater and sharing a wholesome vision of making products that are enjoyable and useful and “draw in the whole family, so that these devices in the home are really something that are very, very worthwhile.”

    There’s a comforting optimism in this moment, as the announcer concludes this segment on the future by saying cheerily that “The Microsoft vision of a computer on every desk and in every home — in this country and around the world — is becoming a reality.” Of course, what the video didn’t mention was the release of Windows 95 the following year, which in fact brought Microsoft very close to this goal, a goal that seemed radical then and quaint now.

    With breezy corporate enthusiasm, the announcer anticipates, without a hint of concern, a world where “a vast universe of information will be at the fingertips of everyone.”

    “I think there’s an absolutely incredible opportunity here,” Gates says. “And I think it’s going to be very exciting.”

    #Histoire_numérique #Microsoft #Vidéo

  • Bringing new life to the Altair 8800 on Azure Sphere - Microsoft Tech Community
    https://techcommunity.microsoft.com/t5/internet-of-things/bringing-new-life-to-the-altair-8800-on-azure-sphere/ba-p/2554337

    Retro-computing is always fascinating.

    I love embedded IoT development, and when a colleague asked if I’d be interested in working on cloud-enabling an Altair 8800 emulator running on Azure Sphere, I seized the opportunity. I’m fascinated by tech and retro computing, and this project combined both interests; hopefully, it will inspire you to fire up this project, learn about the past and connect to the future.

    The MITS Altair 8800 was built on the Intel 8080. The Intel 8080 was the second 8-bit microprocessor manufactured by Intel, clocked at 2MHz, with a 16-bit address bus to access a whopping 64 KB of RAM. This was back in 1974, yes, that’s 47 years ago. The Altair 8800 is considered to be the computer that sparked the PC revolution and kick-started Microsoft and Apple.

    You can learn more about the Altair at https://en.wikipedia.org/wiki/Altair_8800.


    Altair 8800 image attribution: File:Altair 8800, Smithsonian Museum.jpg - Wikimedia Commons

    The first release of the Altair 8800 was programmed by flipping switches; then came paper tape readers to load programs, monitors and keyboards, and floppy disk drive storage, all revolutionary at the time. The first programming language for the machine was Altair BASIC, a program written by Bill Gates and Paul Allen and Microsoft’s first product.

    Interest in the Altair 8800 is not new: there are several implementations of open-source Altair 8800 emulators running on various platforms, and if you are keen, you can even buy an Altair 8800 clone. The Altair 8800 running on Azure Sphere builds on those open-source projects and brings something unique: it modernizes and cloud-enables the Altair, bringing 21st-century cloud technologies to a computer generation that predates the internet.

    #Histoire_numérique #Altair

  • This is how we lost control of our faces | MIT Technology Review
    https://www.technologyreview.com/2021/02/05/1017388/ai-deep-learning-facial-recognition-data-history

    The largest ever study of facial-recognition data shows how much the rise of deep learning has fueled a loss of privacy.
    by Karen Hao, February 5, 2021

    In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.

    Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

    People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Deborah Raji, a coauthor of the study. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

    A history of facial-recognition data

    The researchers identified four major eras of facial recognition, each driven by an increasing desire to improve the technology. The first phase, which ran until the 1990s, was largely characterized by manually intensive and computationally slow methods.

    But then, spurred by the realization that facial recognition could track and identify individuals more effectively than fingerprints, the US Department of Defense pumped $6.5 million into creating the first large-scale face data set. Over 15 photography sessions in three years, the project captured 14,126 images of 1,199 individuals. The Face Recognition Technology (FERET) database was released in 1996.

    The following decade saw an uptick in academic and commercial facial-recognition research, and many more data sets were created. The vast majority were sourced through photo shoots like FERET’s and had full participant consent. Many also included meticulous metadata, Raji says, such as the age and ethnicity of subjects, or illumination information. But these early systems struggled in real-world settings, which drove researchers to seek larger and more diverse data sets.

    In 2007, the release of the Labeled Faces in the Wild (LFW) data set opened the floodgates to data collection through web search. Researchers began downloading images directly from Google, Flickr, and Yahoo without concern for consent. LFW also relaxed standards around the inclusion of minors, using photos found with search terms like “baby,” “juvenile,” and “teen” to increase diversity. This process made it possible to create significantly larger data sets in a short time, but facial recognition still faced many of the same challenges as before. This pushed researchers to seek yet more methods and data to overcome the technology’s poor performance.

    Then, in 2014, Facebook used its user photos to train a deep-learning model called DeepFace. While the company never released the data set, the system’s superhuman performance elevated deep learning to the de facto method for analyzing faces. This is when manual verification and labeling became nearly impossible as data sets grew to tens of millions of photos, says Raji. It’s also when really strange phenomena started appearing, like auto-generated labels that included offensive terminology.


    The way the data sets were used began to change around this time, too. Instead of trying to match individuals, new models began focusing more on classification. “Instead of saying, ‘Is this a photo of Karen? Yes or no,’ it turned into ‘Let’s predict Karen’s internal personality, or her ethnicity,’ and boxing people into these categories,” Raji says.

    Amba Kak, the global policy director at AI Now, who did not participate in the research, says the paper offers a stark picture of how the biometrics industry has evolved. Deep learning may have rescued the technology from some of its struggles, but “that technological advance also has come at a cost,” she says. “It’s thrown up all these issues that we now are quite familiar with: consent, extraction, IP issues, privacy.”

    Raji says her investigation into the data has made her gravely concerned about deep-learning-based facial recognition.

    “It’s so much more dangerous,” she says. “The data requirement forces you to collect incredibly sensitive information about, at minimum, tens of thousands of people. It forces you to violate their privacy. That in itself is a basis of harm. And then we’re hoarding all this information that you can’t control to build something that likely will function in ways you can’t even predict. That’s really the nature of where we’re at.”

    #Reconnaissance_faciale #éthique #Histoire_numérique #Surveillance

  • YouTube at 15: what happened to some of the platform’s biggest early stars? | Global | The Guardian
    https://www.theguardian.com/global/2020/feb/16/youtube-turns-15-what-happened-to-some-of-the-platfoms-biggest-early-st

    As YouTube celebrates its 15th birthday, we talk to five early adopters about how the all-singing all-dancing platform has evolved
    by Chris Stokel-Walker
    Sun 16 Feb 2020 12.00 GMT Last modified on Fri 21 Feb 2020 18.40 GMT

    Late on the evening of 14 February 2005, Jawed Karim, Chad Hurley and Steve Chen registered the website YouTube.com. Two months later, when the first video (of Karim briefly describing the elephant enclosure at the San Diego Zoo) was uploaded, a platform was launched that has gone on to change the world.

    Today, more than 2bn of us visit YouTube monthly, and 500 hours of footage is uploaded every minute. That’s a far cry from the 18-second video that started it all. Its stars are multi-millionaires: YouTube’s highest earner in 2019 was an eight-year-old called Ryan, who netted $26m. The number of creators earning five or six figures has increased by more than 40% year on year. At first, users earned a few hundred pounds for mentioning products in their videos; now they can make hundreds of thousands, and much more through exclusive brand deals. Not many like talking about their income: it makes them less relatable.

    The scale of viewership has increased, too. It took eight years for the site to get to 1bn monthly users, and another seven to reach 2bn. There are 65% more channels with more than 1m subscribers than a year ago; the number of channels with more than 1bn views has grown fivefold in the past three years.

    As more money and more eyeballs have entered the frame, the level of competition has increased. What was once a site for hobbyists has turned into a mini-Hollywood, with huge teams of staff churning out content for demanding fans.

    As the website celebrates its 15-year anniversary, five of its significant early stars explain how their relationship with the site has evolved.

    #YouTube #Histoire_numérique

  • Windows turns 35: a visual history - The Verge
    https://www.theverge.com/2015/11/19/9759874/microsoft-windows-35-years-old-visual-history

    The PC revolution started off life 35 years ago this week. Microsoft launched its first version of Windows on November 20th, 1985, to succeed MS-DOS. It was a huge milestone that paved the way for the modern versions of Windows we use today. While Windows 10 doesn’t look anything like Windows 1.0, it still has many of the original fundamentals, like scroll bars, drop-down menus, icons, dialog boxes, and apps like Notepad and MS Paint.

    Windows 1.0 also set the stage for the mouse. If you used MS-DOS then you could only type in commands, but with Windows 1.0 you picked up a mouse and moved windows around by pointing and clicking. Alongside the original Macintosh, the mouse completely changed the way consumers interacted with computers. At the time, many complained that Windows 1.0 focused far too much on mouse interaction instead of keyboard commands. Microsoft’s first version of Windows might not have been well received, but it kick-started a battle between Apple, IBM, and Microsoft to provide computing to the masses.

    #Histoire_numérique #Windows

  • YouTube started as an online dating site - CNET
    https://www.cnet.com/news/youtube-started-as-an-online-dating-site

    Long before Tinder made swiping a thing for matchmaking apps, there was a little-known video site trying to play cupid to the Internet generation: YouTube.

    That’s not exactly what comes to mind when you think of the world’s largest video site, which welcomes a billion visitors a month. But that’s how YouTube, which Google bought in 2006 for $1.65 billion, got its start, said co-founder Steve Chen.

    “We always thought there was something with video there, but what would be the actual practical application?” Chen said Monday at the South by Southwest tech, film and music conference in Austin, Texas. “We thought dating would be the obvious choice.”

    The idea was for single people to make videos introducing themselves and saying what they were looking for, said Chen. After five days no one had uploaded a single video, so he and the other co-founders, Chad Hurley and Jawed Karim, reconsidered.

    #YouTube #Histoire_numérique #Dating

  • 29 Years Ago Today, The First Web Page Went Live. This Is What It Looked Like | IFLScience
    https://www.iflscience.com/technology/29-years-ago-today-the-first-web-page-went-live-this-is-what-it-looked-l

    In the 1980s, Tim Berners-Lee became frustrated with how information was shared and stored at the European Organization for Nuclear Research (CERN).

    He noticed that, as well as being distributed inefficiently, information was being lost at the organization, largely due to a high turnover of staff. Technical details of old projects could sometimes be lost forever, or else had to be recovered through lengthy investigations in the event of an emergency. Different divisions of CERN used software written in a variety of programming languages, on different operating systems, making the transfer of knowledge cumbersome and time-consuming.

    In response to these annoyances, he made a suggestion in 1989 that would go on to change the world, under the rather lackluster title Information Management: A Proposal. It described a system where all the different divisions of CERN could publish their own part of the experiment, and everyone else could access it. The system would use hypertext to allow people to publish and read the information on any kind of computer. This was the beginning of the World Wide Web.

    The first web page went live 29 years ago today, on August 6, 1991. As such you’ve probably seen people online today linking to the “first-ever” web page.

    29 years ago today, Tim Berners-Lee posted the first ever web page.

    This is it: https://t.co/MSxAZ2cMZK
    — Russ (@RussInCheshire) August 5, 2020

    If you click the link, this is what you will be greeted with. You’ll probably be instantly confused by the date, as well as the lack of memes and people being incredibly aggressive in the comment section.
    Image: CERN archive.

    While it gives you an idea of what the first web page looked like, we may never know what the actual web page displayed on that day in August 1991. There are no screenshots; instead, what you are seeing is the earliest record we have of that first web page, taken in 1992. While we know that when the World Wide Web first launched it contained an explanation of the project itself, hypertext and how to create web pages, the first page of the system designed to prevent the loss of information has ironically been lost, perhaps forever.

    Though in retrospect what Berners-Lee had invented was world-changing, at the time its creators were too preoccupied with trying to convince their colleagues of its value and get them to adopt it to think about archiving their invention for future historians to gawp at.

    “I mean the team at the time didn’t know how special this was, so they didn’t think to keep copies, right?” Dan Noyes, who ran the much larger CERN website in 2013, told NPR. He believes the first incarnation of the world’s first web page is still out there somewhere, probably on a floppy disk or hard drive hanging around in somebody’s house.

    That was how the 1992 version was found.

    “I took a copy of the entire website in a floppy disk on my machine so that I could demonstrate it locally just to show people what it was like. And I ended up keeping a copy of that floppy disk,” Tim Berners-Lee told NPR.

    Unfortunately, despite CERN’s best efforts, the first page itself has not been found. It may never be.

    #Histoire_numérique #Web #Première_page #Tim_Berners_Lee

  • William English, Who Helped Build the Computer Mouse, Dies at 91 - The New York Times
    https://www.nytimes.com/2020/07/31/technology/william-english-who-helped-build-the-computer-mouse-dies-at-91.html?campaig

    William English, the engineer and researcher who helped build the first computer mouse and, in 1968, orchestrated an elaborate demonstration of the technology that foretold the computers, tablets and smartphones of today, died on July 26 in San Rafael, Calif. He was 91.

    His death, at a medical facility, was confirmed by his wife, Roberta English, who said the cause was respiratory failure.

    In the late 1950s, after leaving a career in the Navy, Mr. English joined a Northern California research lab called the Stanford Research Institute, or S.R.I. (now known as SRI International). There he met Douglas Engelbart, a fellow engineer who hoped to build a new kind of computer.

    At a time when only specialists used computers, entering and retrieving information through punched cards, typewriters and printouts, Mr. Engelbart envisioned a machine that anyone could use simply by manipulating images on a screen. It was a concept that would come to define the information age, but by his own admission Mr. Engelbart had struggled to explain his vision to others.
    Image: At a time when only specialists used computers, entering and retrieving information through punched cards, typewriters and print-outs… (Credit: via English family)

    Mr. English, known to everyone as Bill, was one of the few who understood these ideas and who had the engineering talent, patience and social skills needed to realize them. “He was the guy who made everything happen,” said Bill Duvall, who worked alongside Mr. English during those years. “If you told him something needed to be done, he figured out how to do it.”

    After Mr. Engelbart had envisaged the computer mouse and drawn a rough sketch of it on a notepad, Mr. English built it in the mid-1960s. Housed inside a small pinewood case, the device consisted of two electrical mechanisms, called potentiometers, that tracked the movement of two small wheels as they moved across a desktop. They called it a mouse because of the way the computer’s on-screen cursor, called a CAT, seemed to chase the device’s path.

    As they were developing the system, both Mr. English and Mr. Engelbart were part of the government-funded L.S.D. tests conducted by a nearby lab called the International Foundation for Advanced Study. Both took the psychedelic as part of a sweeping effort to determine whether it could “open the mind” and foster creativity.

    Though Mr. Engelbart oversaw the NLS (oN-Line System) project, the 1968 demonstration in San Francisco was led by Mr. English, who brought both engineering and theater skills to the task. In the mid-1950s he had volunteered as a stage manager for a Bay Area theater troupe called The Actor’s Workshop.

    For the San Francisco event, he used a video projector the size of a Volkswagen Beetle (borrowed from a nearby NASA lab) to arrange and project the live images behind Mr. Engelbart as he demonstrated NLS from the stage. He had been able to set up the wireless link that sent video between the Menlo Park computer lab and the auditorium after befriending a telephone company technician.
    Image: Mr. English helped orchestrate an elaborate demonstration of the technology that foretold the computers, tablets and smartphones of today. (Credit: via English family)

    Three years after the demonstration, Mr. English left S.R.I. and joined a new Xerox lab called the Palo Alto Research Center, or PARC. There he helped adapt many of the NLS ideas for a new machine called the Alto, which became a template for the Apple Macintosh, the first Microsoft Windows personal computers and other internet-connected devices.

    #Histoire_numérique #Souris #Bill_English #SRI_international #Xerox_park #Mother_of_all_demo

  • 25 years of PHP: The personal web tools that ended up everywhere • The Register
    https://www.theregister.com/2020/06/08/25_years_of_php

    Feature: On 8th June 1995, programmer Rasmus Lerdorf announced the birth of “Personal Home Page Tools (PHP Tools)”.

    The PHP system evolved into one that now drives nearly 80 per cent of websites using server-side programming, according to figures from w3techs.

    Well-known sites running PHP include every WordPress site (WordPress claims to run “35 per cent of the web”), Wikipedia and Facebook (with caveats - Facebook uses a number of languages, including its own JIT-compiled dialect of PHP running on HHVM). PHP is also beloved by hosting companies, many of whom provide their customers with phpMyAdmin for administering MySQL databases.

    Lerdorf was born in Greenland and grew up in Denmark and Canada. He worked at Yahoo! (a big PHP user) and Etsy. He developed PHP for his own use. “In 1993 programming the web sucked,” he said in a 2017 talk.

    It was CGI written in C and “you had to change the C code and recompile even for a slight change,” he said.

    Perl was “slightly better”, Lerdorf opined, but “you still had to write Perl code to spit out HTML. I didn’t like that at all. I wanted a simple templating language that was built into the web server.”

    The Danish-Canadian programmer’s original idea was that developers would still write the bulk of their web application in C but “just use PHP as the templating language.” However, nobody wanted to write C, said Lerdorf, and people “wanted to do everything in the stupid little templating language I had written, all their business logic.”

    Lerdorf described a kind of battle with early web developers as PHP evolved, with the developers asking for more and more features while he tried to point them towards other languages for what they wanted to do.

    “This is how we got PHP,” he said, “a templating language with business logic features pushed into it.”
    The web’s workhorse

    Such is the penetration of PHP, which Lerdorf said drives around 2 billion sites on 10 million physical machines, that improving efficiency in PHP 7 had a significant impact on global energy consumption. Converting the world from PHP 5.0 to PHP 7 would save 15 billion kilowatt-hours of electricity annually and cut carbon dioxide emissions by 7.5 billion kilograms, he said – forgetting perhaps that any unused cycles would soon be taken up by machine learning and AI algorithms.

    PHP is the workhorse of the web but not fashionable. The language is easy to use but its dynamic and forgiving nature makes it accessible to developers of every level of skill, so that there is plenty of spaghetti code out there, quick hacks that evolved into bigger projects. In particular, early PHP code was prone to SQL injection bugs as developers stuffed input from web forms directly into SQL statements, or other bugs and vulnerabilities thanks to a feature called register_globals that was on by default and which will “inject your scripts with all sorts of variables,” according to its own documentation.
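
    As a concrete illustration of the injection pattern described above: the historical bugs were in PHP, but this short sketch uses Python's sqlite3 module and a made-up users table (nothing here comes from the article) to show the difference between splicing form input into the SQL text and passing it as a parameter.

    ```python
    import sqlite3

    # Minimal demonstration of SQL injection versus a parameterized query.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

    user_input = "alice' OR '1'='1"    # hostile "form input"

    # Vulnerable: the input becomes part of the SQL statement itself.
    unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())                # returns every row

    # Safe: the driver passes the value separately from the query text.
    safe = "SELECT name FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())   # returns no rows
    ```

    The register_globals problem was of the same family: values arriving from outside the script were trusted as if the programmer had set them.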

    There was originally no formal specification for PHP and it is still described as work in progress. It is not a compiled language and object orientation was bolted on rather than being designed in from the beginning as in Java or C# or Ruby. Obsolete versions of PHP are commonplace all over the internet on the basis that as long as it works, nobody touches it. It has become the language that everyone uses but nobody talks about.

    “PHP is not very exciting and there is not much to it,” said Lerdorf in 2002.

    Regularly coming fourth as the most popular language in the Redmonk rankings, PHP has rarely got a mention in the analysis.

    That said, PHP has strong qualities that account for its popularity and longevity. Some of this is to do with Lerdorf himself, who has continued to steer PHP with wisdom and pragmatism, though it is a community project with thousands of contributors. It lacks corporate baggage and has always been free and open source. “The tools are in the public domain distributed under the GNU Public License. Yes, that means they are free!” said Lerdorf in the first announcement.

    The documentation site is a successful blend of reference and user contributions which means you almost always find something useful there, which is unusual. Most important, PHP is reliable and lightweight, which means it performs well in real-world use even if it does not win in benchmarks.

    25 years is a good run and it is not done yet. ®

    #Histoire_numérique #Languages_informatiques #PHP

  • How Bill Gates described the internet ’tidal wave’ in 1995
    https://www.cnbc.com/2020/05/26/how-bill-gates-described-the-internet-tidal-wave-in-1995.html

    As a pioneer of the personal computer revolution, Microsoft co-founder Bill Gates spent decades working toward his goal of putting “a computer on every desk and in every home.”

    So it’s no surprise that, by the mid-1990s, Gates was among the earliest tech CEOs to recognize the vast promise of the internet for reaching that goal and growing his business.

    Exactly 25 years ago today, on May 26, 1995, Gates wrote an internal memo to Microsoft’s executive staff and his direct reports to extol the benefits of expanding the company’s internet presence. Gates, who was still Microsoft’s CEO at that point, titled his memo, simply: “The Internet Tidal Wave.”

    “The Internet is a tidal wave. It changes the rules. It is an incredible opportunity as well as [an] incredible challenge,” Gates wrote in the memo.

    The point of the memo was that the internet was fast becoming a ubiquitous force that was already changing the way people and businesses communicated with each other on a daily basis.

    “I have gone through several stages of increasing my views of [the internet’s] importance,” Gates told Microsoft’s executive team in the memo, which WIRED magazine re-printed in full in 2010. “Now I assign the internet the highest level of importance.”

    Gates goes on to pinpoint his foremost goal for the memo: “I want to make clear that our focus on the Internet is crucial to every part of our business.”

    In the memo, Gates explained a bit about how he saw the internet being used in 1995, with both businesses and individuals publishing an increasing amount of content online, from personal websites to audio and video files.

    “Most important is that the Internet has bootstrapped itself as a place to publish content,” Gates wrote. “It has enough users that it is benefiting from the positive feedback loop of the more users it gets, the more content it gets, and the more content it gets, the more users it gets.”

    Gates saw that as one area where Microsoft would have to seize the available opportunities to serve its software customers. He notes that audio and video content could already be shared online in 1995, including in real-time with phone calls placed over the web and even early examples of online video-conferencing.

    While that technology provided exciting opportunities, Gates says, the audio and video quality of those products at the time was relatively poor. “Even at low resolution it is quite jerky,” he wrote of the video quality at that point, adding that he expected the technology to improve eventually “because the internet will get faster.”

    (And, he was certainly correct there, as video-conferencing software has been in increasingly high demand in recent years and is widely in use now by the millions of American workers currently working remotely due to coronavirus restrictions.)

    Gates writes that improving the internet infrastructure to offer higher quality audio and video content online would be essential to unlocking the promise of the internet. While Microsoft’s Office Suite and Windows software were already popular with computer users, Gates argued that they would need to be optimized for use online in order “to make sure you get your data as fast as you need it.”

    “Only with this improvement and an incredible amount of additional bandwidth and local connections will the internet infrastructure deliver all of the promises of the full blown Information Highway,” Gates wrote before adding, hopefully: “However, it is in the process of happening and all we can do is get involved and take advantage.”

    The then-CEO of Microsoft also pushed the need to beef up Microsoft’s own website, where he said customers and business clients should have access to a wealth of information about the company and its products.

    “Today, it’s quite random what is on the home page and the quality of information is very low,” Gates wrote in the 1995 memo. “If you look up speeches by me all you find are a few speeches over a year old. I believe the Internet will become our most important promotional vehicle and paying people to include links to our home pages will be a worthwhile way to spend advertising dollars.”

    Gates told his employees that Microsoft needed to “make sure that great information is available” on the company’s website, including using screenshots to show examples of the company’s software in action.

    “I think a measurable part of our ad budget should focus on the Internet,” he wrote. “Any information we create — white papers, data sheets, etc., should all be done on our Internet server.”

    After all, Gates argued, the internet offered Microsoft a great opportunity to communicate directly with the company’s customers and clients.

    “We have an opportunity to do a lot more with our resources. Information will be disseminated efficiently between us and our customers with less chance that the press miscommunicates our plans. Customers will come to our ‘home page’ in unbelievable numbers and find out everything we want them to know.”

    Of course, in 1995, it wasn’t just Gates’ fellow executives at Microsoft who needed convincing that the internet was the future. In November of that year, Gates went on CBS’s “Late Show with David Letterman” to promote his book “The Road Ahead” and Microsoft’s then-newly launched Internet Explorer, the company’s first web browser.

    Gates touted the possibilities of the World Wide Web in his interview with Letterman, calling the internet “a place where people can publish information. They can have their own homepage, companies are there, the latest information.”

    The comedian wasn’t particularly impressed. “I heard you could watch a live baseball game on the internet and I was like, does radio ring a bell?” Letterman joked.

    That same year, Gates gave an interview with GQ magazine’s UK edition in which he predicted that, within a decade, people would regularly watch movies, television shows and other entertainment online. In fact, 10 years later, in 2005, YouTube was founded, followed two years later by Netflix’s streaming service.

    However, Gates missed the mark when the interviewer suggested that the internet could also become rife with misinformation that could more easily spread to large groups of impressionable people. The Microsoft co-founder was dubious that the internet would become a repository for what might now be described as “fake news,” arguing that having more opportunities to verify information by authorities, such as experts or journalists, would balance out the spread of misinformation.

    #Histoire_numérique #Bill_Gates #Internet

  • Steven Levy : Streaming celebrates its 25th birthday. Here’s how it all began
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bc0nqt.1f2j/12bb6811

    So it’s a good time to say happy birthday to streaming media, which just celebrated its 25th anniversary. Two and a half decades ago, a company called Progressive Networks (later renamed RealNetworks) began using the internet to broadcast live and on-demand audio.

    I spoke with its CEO, Rob Glaser, this week about the origins of streaming internet media. Glaser, with whom I have become friendly over the years, told me that he began pursuing the idea after attending a board meeting for a new organization called the Electronic Frontier Foundation in 1993. During the gathering, he saw an early version of Mosaic, the first web browser truly capable of handling images. “A light bulb went off,” Glaser says. “What if it could do the same for audio and video? Anybody could be a broadcaster, and anybody could hear it from anywhere in the world, anytime they wanted to.”

    Glaser believed it was time for a commercial service. When he launched his service on April 25, 1995, the first customers were ABC News and NPR; you could listen to news headlines or Morning Edition. It wasn’t the most user-friendly arrangement—you had to download his RealAudio app to your desktop and then hope it made a successful connection to the browser. At that point, it worked only on demand. But in September 1995, Progressive Networks began live streaming. Its first real-time broadcast was the audio of a major league baseball game—the Seattle Mariners versus the New York Yankees. (The Mariners won. The losing pitcher was Mariano Rivera, then a starter.) The few who listened from the beginning had to reboot around the seventh inning, as the buffers filled up after two and a half hours or so. By the end of that year, thousands of developers were using Real.

    Other companies began streaming video before Glaser’s, which introduced RealVideo in 1997. The internet at that point wasn’t robust enough to handle high-quality video, but those in the know understood that it was just a matter of time. “It was clear to me that this was going to be the way that everything is going to be delivered,” says Glaser, who gave a speech around then titled “The Internet as the Next Mass Medium.” That same year, Glaser had a conversation with an entrepreneur named Reed Hastings, who told him of his long-range plan to build a business by shipping physical DVDs to people, and then shift to streaming when the infrastructure could support it. That worked out well. Today, our strong internet supports not only entertainment but social programming from YouTube, Facebook, TikTok and others.

    #Histoire_numérique #Streaming

  • The History of the URL
    https://blog.cloudflare.com/the-history-of-the-url

    On the 11th of January 1982 twenty-two computer scientists met to discuss an issue with ‘computer mail’ (now known as email). Attendees included the guy who would create Sun Microsystems, the guy who made Zork, the NTP guy, and the guy who convinced the government to pay for Unix. The problem was simple: there were 455 hosts on the ARPANET and the situation was getting out of control.

    More than just the history of URLs, this article walks through the various choices that have marked the evolution of the internet with regard to naming (of emails, web pages and servers).

    The Web Application

    In the world of web applications, it can be a little odd to think of the hyperlink as the basis for the web. It is a method of linking one document to another, which was gradually augmented with styling, code execution, sessions, authentication, and ultimately became the shared social computing experience so many ’70s researchers were trying (and failing) to create. Ultimately, the conclusion is just as true for any project or startup today as it was then: all that matters is adoption. If you can get people to use it, however slipshod it might be, they will help you craft it into what they need. The corollary is, of course, that if no one is using it, it doesn’t matter how technically sound it might be. There are countless tools into which millions of hours of work were poured and which precisely no one uses today.

    #Nommage #URL #Histoire_numérique

  • Fred Turner, Aux sources de l’utopie numérique. De la contre-culture à la cyberculture, Stewart Brand un homme d’influence
    https://journals.openedition.org/questionsdecommunication/8619

    Fred Turner revisits the intellectual and social origins of the internet by following the trajectory of Stewart Brand, a “network entrepreneur” (“entrepreneur réticulaire”, p. 41). The book opens with a question: how is it that the word revolution is on everyone’s lips when digital technologies come up, when those technologies were once the symbol of an inhuman system that brought the world to the brink of nuclear apocalypse? To answer it, the author traces the origins of the digital utopia through Stewart Brand’s path, at the crossroads of social worlds, ideologies and technological objects.

    #Fred_Turner #Utopie_numérique #Histoire_numérique

  • George Laurer, co-inventor of the barcode, dies at 94 - BBC News
    https://www.bbc.com/news/world-us-canada-50726950

    George Laurer, the US engineer who helped develop the barcode, has died at the age of 94.

    Barcodes, which are made up of black stripes of varying thickness and a 12-digit number, help identify products and transformed the world of retail.

    They are now found on products all over the world.

    The idea was pioneered by a fellow IBM employee, but it was not until Laurer developed a scanner that could read codes digitally that it took off.

    Laurer died last Thursday at his home in Wendell, North Carolina, and his funeral was held on Monday.


    It was while working as an electrical engineer with IBM that George Laurer fully developed the Universal Product Code (UPC), or barcode.

    He developed a scanner that could read codes digitally. He also used stripes rather than circles, which were not practical to print.
    Video: Aaron Heslehurst explains how the barcode became a million-dollar idea.

    The UPC went on to revolutionise “virtually every industry in the world”, IBM said in a tribute on its website.

    In the early 1970s, grocery shops faced mounting costs and the labour-intensive need to put price tags on everything.

    The UPC system used lasers and computers to quickly process items via scanning. This meant fewer pricing errors and easier accounting.

    The first product scanned, in Ohio in June 1974, was a packet of Wrigley’s Juicy Fruit chewing gum. It is now on display at the Smithsonian National Museum of American History in Washington.

    Fellow IBM employee Norman Woodland, who died in 2012, is considered the pioneer of the barcode idea, which he initially based on Morse code.

    Although he patented the concept in the 1950s, he was unable to develop it. It would take a few more years for Laurer to bring the idea to fruition with the help of low-cost laser and computing technology.
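
    One technical detail worth spelling out (not drawn from the article itself): the 12th digit of a UPC-A code is a check digit computed from the first eleven, which is part of what lets a scanner reject misreads. Below is a minimal sketch of that standard calculation in Python; the sample digits are made up.

    ```python
    def upc_check_digit(first_eleven: str) -> int:
        """UPC-A check digit: weight the odd positions (1st, 3rd, ...) by 3 and the
        even positions by 1, then pick the digit that makes the total a multiple of 10."""
        digits = [int(c) for c in first_eleven]
        total = 3 * sum(digits[0::2]) + sum(digits[1::2])
        return (10 - total % 10) % 10

    # Hypothetical 11-digit prefix; a scanner would reject the printed code
    # if its 12th digit did not match this value.
    print(upc_check_digit("03600029145"))   # -> 2
    ```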

    #Code_a_barre #Histoire_numérique

  • The Earliest Unix Code: An Anniversary Source Code Release - CHM
    https://computerhistory.org/blog/the-earliest-unix-code-an-anniversary-source-code-release

    2019 marks the 50th anniversary of the start of Unix. In the summer of 1969, that same summer that saw humankind’s first steps on the surface of the Moon, computer scientists at the Bell Telephone Laboratories—most centrally Ken Thompson and Dennis Ritchie—began the construction of a new operating system, using a then-aging DEC PDP-7 computer at the labs. As Ritchie would later explain:

    “What we wanted to preserve was not just a good environment to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied from remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.”1

    #Histoire_numérique #UNIX

  • Jeu de 7 familles de l’informatique - Interstices
    https://interstices.info/jeu-de-7-familles-de-linformatique

    Computer science is a diverse science! This game is an opportunity to present important figures who have worked, and are still working, to shape the discipline and make it evolve over time.

    The “happy families” card-game format (“jeu de 7 familles”) highlights 42 (+1) personalities and shows that the history of computer science cannot be reduced to the history of computers. Computer science develops through a scientific community that keeps the discipline alive and interacts with other fields. In this game we propose to explore 7 of its major themes.

    The first family focuses on “Algorithms & programming”, concepts at the root of computational thinking. The second family presents the link with mathematics, “Mathematics & computer science” being intrinsically linked disciplines. Computer science is also about protecting information, with the third family, “Security & confidentiality”, as well as building and disseminating information, represented by the fourth family, “Systems & networks”. Computer science does not exist without the machines that carry out the computations, machines found in the fifth family, “Machines & components”. Developing automatic solutions inspired by humans is the challenge taken up by the sixth family, “Artificial intelligence”. Finally, using computers requires interfaces that make them accessible, which is the focus of the seventh family, “Human-computer interaction”.

    The game has the particularity of including a Joker in the person of Alan Turing (who, given the breadth of his research, has a place in every family).

    In each family you will discover six personalities, ranked (in binary representation, see below) in ascending order of year of birth. We included each of them for many reasons. From the great early precursors to the founders of the discipline, through the unavoidable names, to scientists still active today, what we offer is necessarily a partial view. But if it makes you want to discover computer science, and perhaps even join its various actors, the bet will have paid off.

    The personalities were selected on the basis of several criteria: striking a balance between the representation of men and women, and trying not to limit our panorama to Western history alone (even though the French are well represented), while seeking to give a historical perspective on the scientific dynamic. In making our choices we came across many other important, often little-known figures. 7 families are not enough to cover every area of computer science; for that, we would need at least an expansion of the game with new families!

    We invite you to have fun with the game and to discover all the personalities (as well as the objects that accompany them, as illustrations of their work).

    #Histoire_numérique #Jeu

  • Britain’s £50 Note Will Honor Computing Pioneer Alan Turing - The New York Times
    https://www.nytimes.com/2019/07/15/business/alan-turing-50-pound-note.html

    LONDON — Alan Turing, the computing pioneer who became one of the most influential code breakers of World War II, has been chosen by the Bank of England to be the new face of its 50-pound note.

    The decision to put Mr. Turing on the highest-denomination English bank note, worth about $62, adds to growing public recognition of his achievements. His reputation during his lifetime was overshadowed by a conviction under Britain’s Victorian laws against homosexuality, and his war work remained a secret until decades later.

    “Alan Turing was an outstanding mathematician whose work has had an enormous impact on how we live today,” Mark Carney, the governor of the Bank of England, said in a statement. “As the father of computer science and artificial intelligence, as well as a war hero, Alan Turing’s contributions were far-ranging and path breaking.”

    “Turing is a giant on whose shoulders so many now stand,” Mr. Carney added.

    The central bank announced last year that it wanted to honor someone in the field of science on the next version of the bill, which was last redesigned in 2011, and Mr. Turing was chosen from a list of 227,299 nominees that included Charles Babbage, Stephen Hawking, Ada Lovelace and Margaret Thatcher (who worked as a chemical researcher before entering politics).

    “The strength of the shortlist is testament to the U.K.’s incredible scientific contribution,” Sarah John, the Bank of England’s chief cashier, said in a statement.

    The bank plans to put the new note into circulation by the end of 2021.

    Bank of England bills feature Queen Elizabeth’s face on one side, and a notable figure from British history on the other. Scientists previously honored in this way include Newton, Darwin and the electrical pioneer Michael Faraday. The current £50 features James Watt, a key figure in the development of the steam engine, and Matthew Boulton, the industrialist who backed him.

    Mr. Turing’s work provided the theoretical basis for modern computers, and for ideas of artificial intelligence. His work on code-breaking machines during World War II also drove forward the development of computing, and is regarded as having significantly affected the course of the war.

    Mr. Turing died in 1954, two years after being convicted under Victorian laws against homosexuality and forced to endure chemical castration. The British government apologized for his treatment in 2009, and Queen Elizabeth granted him a royal pardon in 2013.

    The note will feature a photograph of Mr. Turing from 1951 that is now on display at the National Portrait Gallery. His work will also be celebrated on the reverse side, which will include a table and mathematical formulas from a paper by Mr. Turing from 1936 that is recognized as foundational for computer science.

    #Histoire_numérique #Alan_Turing

  • Jean-Marie Hullot, visionary computer scientist, exceptional technologist | binaire
    http://binaire.blog.lemonde.fr/2019/06/20/jean-marie-hullot-informaticien-visionnaire-technologiste-excep

    Jean-Marie Hullot was a consummate computing professional. Beyond the scientific contributions of his early career as an IRIA researcher, detailed further on, few people have had such a strong and lasting impact on everyday computing. We owe to him, directly, the modern graphical and touch-based interfaces and interactions developed first at IRIA, then at NeXT Computer, whose superb machine is still remembered and was used, notably, by Tim Berners-Lee to create the World Wide Web, and finally at Apple through the Macintosh and its Mac OS X system and then the iPhone, genuine revolutions in the field that largely spawned the large-scale, user-friendly computing we know today, in particular the smartphone revolution.

    These particularly elegant and intuitive interfaces marked a clean break with everything that had come before, most of which has since been largely forgotten. It should be understood that they were the product of a very sure aesthetic sense combined with the creation and mastery of subtle, deeply scientific new programming architectures, which Jean-Marie Hullot had begun to develop as a researcher at IRIA. Another major contribution was the mechanism for synchronizing different devices, in this case Macs, iPhones and iPads, so that calendars, to-do lists and the like are automatically up to date as soon as they are modified on one device, with no transformation required and regardless of the networks involved. This now-familiar transparency was hard to achieve and unknown elsewhere. It is worth recalling that the field of local, synchronized human-computer interaction is deep and difficult, and successes at this level are rare. Jean-Marie Hullot's, at NeXT and then Apple, was particularly brilliant, and it also required countless interactions with designers and, above all, directly with Steve Jobs, whose demands for quality were legendary.

    But before his industrial career, Jean-Marie Hullot made many other first-rate scientific contributions. After the École normale supérieure de Saint-Cloud, he quickly became passionate about programming, LISP in particular. This happened at IRCAM, which then housed the only computer in France truly suited to computer science research, the PDP-10 that Pierre Boulez had demanded in order to set up the institute. Among those working there were Patrick Greussay, author of VLISP and founder of the French LISP school, and Jérôme Chailloux, principal author of the Le_Lisp system, which long dominated the French artificial intelligence scene and to which Hullot contributed a great deal.

    After meeting Gérard Huet, whose DEA (graduate) course he was taking at Orsay, he joined IRIA at Rocquencourt for his doctoral work. He began his research in term rewriting, a field that grew out of mathematical logic and universal algebra and is therefore essential to the mathematical foundations of computer science. Starting from the completion algorithm described in Knuth and Bendix's seminal paper, he built a complete completion system for algebraic theories, incorporating the latest advances in handling commutative and associative operators and opening the way to the computation of Gröbner polynomial bases. The KB software that came out of his thesis work was exceptionally well engineered algorithmically, making it possible to experiment with non-trivial axiomatizations, such as the canonical modeling of the movements of the University of Edinburgh robot. The software's reputation earned him a one-year invitation as a visiting researcher at the Stanford Research Institute in 1980-1981. There, together with Gérard Huet, he developed the foundations of the theory of algebraic rewriting, then in its infancy. His paper Canonical forms and unification, presented at the International Conference on Automated Deduction in 1980, gives a fundamental result on narrowing that made it possible to establish the completeness theorem for the narrowing procedure (Term Rewriting Systems, Cambridge University Press, 2003, p. 297).
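
    To give a concrete sense of what "rewriting to canonical forms" means here, below is a minimal illustrative sketch in Python. It is not the KB system or any algorithm described above; the rules (a unit element and an inverse, as in group theory) and the term encoding are invented for illustration. Terms are nested tuples, rules rewrite subterms, and a term is in canonical (normal) form when no rule applies.

    # Minimal term-rewriting sketch (illustrative only; not the KB system described above).
    # Terms are nested tuples such as ("mul", ("inv", ("a",)), ("a",)); rule variables start with "?".

    RULES = [
        (("mul", ("e",), ("?x",)), ("?x",)),              # e * x      -> x
        (("mul", ("inv", ("?x",)), ("?x",)), ("e",)),     # inv(x) * x -> e
    ]

    def match(pattern, term, subst):
        """Match pattern against term, extending substitution subst; return None on failure."""
        if pattern[0].startswith("?"):                    # rule variable
            bound = subst.get(pattern[0])
            if bound is None:
                new = dict(subst)
                new[pattern[0]] = term
                return new
            return subst if bound == term else None
        if pattern[0] != term[0] or len(pattern) != len(term):
            return None
        for p_arg, t_arg in zip(pattern[1:], term[1:]):
            subst = match(p_arg, t_arg, subst)
            if subst is None:
                return None
        return subst

    def substitute(pattern, subst):
        """Instantiate a rule right-hand side with the bindings found by match."""
        if pattern[0].startswith("?"):
            return subst[pattern[0]]
        return (pattern[0],) + tuple(substitute(arg, subst) for arg in pattern[1:])

    def rewrite_once(term):
        """Apply the first rule that matches, at the root or inside a subterm."""
        for lhs, rhs in RULES:
            subst = match(lhs, term, {})
            if subst is not None:
                return substitute(rhs, subst)
        for i, arg in enumerate(term[1:], start=1):
            reduced = rewrite_once(arg)
            if reduced is not None:
                return term[:i] + (reduced,) + term[i + 1:]
        return None

    def normalize(term):
        """Rewrite until no rule applies: the result is a canonical form."""
        while (nxt := rewrite_once(term)) is not None:
            term = nxt
        return term

    # (inv(a) * a) * b  ->  e * b  ->  b
    print(normalize(("mul", ("mul", ("inv", ("a",)), ("a",)), ("b",))))   # ('b',)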

    His doctoral thesis at Université Paris XI-Orsay, Compilation de formes canoniques dans les théories équationnelles, was defended on November 14, 1980. The culmination of his work in effective algebra, it became the bible of researchers in rewriting, now an essential area of theoretical computer science. It was also the first French technical document typeset with the TeX system, then under development by Don Knuth at Stanford, where Jean-Marie Hullot had learned it. He was struck by the astonishing graphical quality of documents produced with TeX, and also by the bitmap displays then being developed at Xerox's PARC laboratory.

    In 1981 he returned to INRIA at Rocquencourt, where the national Sycomore project led by Jean Vuillemin was getting under way and had just been joined by Jérôme Chailloux, designer of the Le_Lisp language. There he discovered the first Macintosh, a pioneering commercial computer that built on advances from PARC (bitmap display, windowing interface, Ethernet) and SRI (the mouse). But he quickly found the way its interfaces were programmed rather infernal. As this was the era when object-oriented languages were being born, he first decided to develop his own on top of Le_Lisp, named Ceyx, emphasizing dynamic features absent from the other languages of the time (he later moved to Objective-C, a language of the same kind but far more efficient). This remarkable language, whose implementation was a jewel of simplicity and intelligence, was used in particular by Gérard Berry to write his first Esterel compiler.

    This work led to the creation of the first interface builder to combine direct graphical design with simple programming, SOS Interfaces. It was while presenting this highly original system at a seminar at Stanford University that he met Steve Jobs, by then ousted from Apple, who immediately wanted to hire him to create his new machine, the NeXT. Even though the machine was not a commercial success, it is still remembered as probably the most elegant ever built, and it was the precursor of everything that followed.

    Jean-Marie Hullot then took the lead on the interfaces and interactions of the new Macintosh as technical director of Apple's applications division. His creations and those of his team still shape modern computing. He later left Apple and California for a time and settled in Paris. There, Steve Jobs called him back to reinvigorate Apple's creative spirit, but he refused to return to California and proposed instead to create a telephone, or rather a smartphone, as we would now say. After some difficulty convincing Steve Jobs, who did not much believe in the idea, he created the iPhone in a secret laboratory of about twenty people in Paris. The rest is history, and quite different from what Steve Ballmer said at Steve Jobs's first demonstration: "This object has no industrial future"! With more than a billion units sold, it is probably one of the greatest aesthetic and industrial successes in history.

    He also led several technology ventures in France. RealNames, which he created in 1996, aimed to give the then-booming but, in terms of naming, anarchic Internet a standardized naming space. Later, he sought to build an open infrastructure for sharing photographs, following the model of the free encyclopedia Wikipedia, and created Fotopedia for that purpose. These companies did not last, but they allowed many young professionals to train in cutting-edge technologies and, in turn, to spin off new technology companies of their own.

    A creative mathematician, visionary computer scientist, elegant programmer, rigorous engineer, peerless technologist and refined aesthete, Jean-Marie Hullot left his mark on his era. The results of his work quite simply changed the world forever. The Fondation Iris, which he created with his partner Françoise and whose goal is to safeguard the fragile beauty of the world, continues to carry his humanist message: http://fondationiris.org.

    Gérard Berry and Gérard Huet

    #Histoire_numérique #IHM #iPhone #Interface #Synchronisation

  • The most expensive hyphen in history
    https://www.fastcompany.com/90365077/the-most-expensive-hyphen-in-history

    Bugs, bugs, bugs

    By Charles Fishman | 4 minute read

    This is the 18th in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing.

    In the dark on Sunday morning, July 22, 1962, NASA launched the first-ever U.S. interplanetary space probe: Mariner 1, headed for Venus, Earth’s neighbor closer to the Sun.

    Mariner 1 was launched atop a 103-foot-tall Atlas-Agena rocket at 5:21 a.m. EDT. For 3 minutes and 32 seconds, it rose perfectly, accelerating to the edge of space, nearly 100 miles up.

    But at that moment, Mariner 1 started to veer in odd, unplanned ways, first aiming northwest, then pointing nose down. The rocket was out of control and headed for the shipping lanes of the North Atlantic. Four minutes and 50 seconds into flight, a range safety officer at Cape Canaveral—in an effort to prevent the rocket from hitting people or land—flipped two switches, and explosives in the Atlas blew the rocket apart in a spectacular cascade of fireworks visible back in Florida.

    The Mariner 1 probe itself was blown free of the debris, and its radio transponder continued to ping flight control for another 67 seconds, until it hit the Atlantic Ocean.

    This was the third failed probe in 1962 alone; NASA had also launched two failed probes to the Moon. But the disappointment was softened by the fact that a second, identical Mariner spacecraft (along with an identical Atlas-Agena rocket) was already in a hangar at the Cape, standing by. Mariner 2 was launched successfully a month later and reached Venus on December 14, 1962, where it discovered that the temperature was 797°F and that the planet rotated in the opposite direction of Earth and Mars. The Sun on Venus rises in the west.

    It was possible to launch Mariner 1’s twin just 36 days after the disaster because it took scientists at NASA’s Jet Propulsion Laboratory only five days to figure out what had gone wrong. In the handwritten coding instructions, amid dozens and dozens of lines of flight-guidance equations, a single symbol had been written incorrectly, probably through simple inattention.

    In a critical spot, the equations contained an “R” symbol (for “radius”). The “R” was supposed to have a bar over it, indicating a “smoothing” function; the line told the guidance computer to average the data it was receiving and to ignore what was likely to be spurious data. But as written and then coded onto punch cards and into the guidance computer, the “R” didn’t have a bar over it. The “R-bar” became simply “R.”

    As it happened, on launch, Mariner 1 briefly lost guidance-lock with the ground, which was not uncommon. The rocket was supposed to follow its course until guidance-lock was re-achieved, unless it received instructions from the ground computer. But without the R-bar, the ground computer got confused about Mariner 1’s performance, thought it was off course, and started sending signals to the rocket to “correct” its course, instructions that weren’t necessary—and weren’t correct.

    Therefore “phantom erratic behavior” became “actual erratic behavior,” as one analyst wrote. In the minute or so that controllers waited, the rocket and the guidance computer on the ground were never able to get themselves sorted out, because the “averaging” function that would have kept the rocket on course wasn’t programmed into the computer. And so the range safety officer did his job.
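
    As a rough illustration of the difference the missing bar made, here is a minimal sketch with invented numbers; it is not NASA's actual guidance logic or equations. A plain "R" acts on each raw radius reading as it arrives, while "R-bar" averages recent readings so that a brief dropout does not register as a real deviation.

    def r_raw(samples):
        """Plain R: act on the latest (possibly spurious) radius reading."""
        return samples[-1]

    def r_bar(samples, window=5):
        """R-bar: smooth by averaging the last few readings to damp noise and dropouts."""
        recent = samples[-window:]
        return sum(recent) / len(recent)

    # Hypothetical telemetry: steady readings, then one garbage value during a brief
    # loss of guidance lock.
    readings = [100.0, 100.2, 100.1, 100.3, 42.0]
    print(r_raw(readings))   # 42.0  -> looks like a wild deviation, invites spurious "corrections"
    print(r_bar(readings))   # 88.52 -> stays far closer to the true ~100 trend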

    A single handwritten line, the length of a hyphen, doomed the most elaborate spaceship the U.S. had until then designed, along with its launch rocket. Or rather, the absence of that bar doomed it. The error cost $18.5 million ($156 million today).

    In the popular press, for simplicity, the missing bar became a hyphen. The New York Times front-page headline was “For Want of a Hyphen Venus Rocket Is Lost.” The Los Angeles Times headline: “‘Hyphen’ Blows Up Rocket.” The science fiction writer Arthur C. Clarke, in his 1968 book The Promise of Space, called it “the most expensive hyphen in history.”

    For NASA’s computer programmers, it was a lesson in care, caution, and testing that seeped into their bones. During 11 Apollo missions, more than 100 days total of spaceflight, the Apollo flight computers performed without a single fault.

    But what happened to Mariner 1 was, in fact, an arresting vulnerability of the new Space Age. A single missing bolt in a B-52 nuclear bomber wasn’t going to bring down the plane, but a single inattentive moment in computer programming—of the sort anyone can imagine having—could have a cascade of consequences.

    George Mueller was NASA’s associate administrator for manned spaceflight from 1963 to 1969, the most critical period for Apollo’s development. Just before that, Mueller had been an executive at Space Technology Laboratories, which had responsibility for writing the guidance equations for Mariner 1, including the equation with the missing bar.

    During his years at NASA, Mueller kept a reminder of the importance of even the smallest elements of spaceflight on the wall behind his desk: a framed image of a hyphen.

    #Histoire_numérique #Nasa #Mariner

  • Women Once Ruled Computers. When Did the Valley Become Brotopia? - Bloomberg
    https://www.bloomberg.com/news/features/2018-02-01/women-once-ruled-computers-when-did-the-valley-become-brotopia

    Lena Söderberg started out as just another Playboy centerfold. The 21-year-old Swedish model left her native Stockholm for Chicago because, as she would later say, she’d been swept up in “America fever.” In November 1972, Playboy returned her enthusiasm by featuring her under the name Lenna Sjööblom, in its signature spread. If Söderberg had followed the path of her predecessors, her image would have been briefly famous before gathering dust under the beds of teenage boys. But that particular photo of Lena would not fade into obscurity. Instead, her face would become as famous and recognizable as Mona Lisa’s—at least to everyone studying computer science.

    In engineering circles, some refer to Lena as “the first lady of the internet.” Others see her as the industry’s original sin, the first step in Silicon Valley’s exclusion of women. Both views stem from an event that took place in 1973 at a University of Southern California computer lab, where a team of researchers was trying to turn physical photographs into digital bits. Their work would serve as a precursor to the JPEG, a widely used compression standard that allows large image files to be efficiently transferred between devices. The USC team needed to test their algorithms on suitable photos, and their search for the ideal test photo led them to Lena.

    According to William Pratt, the lab’s co-founder, the group chose Lena’s portrait from a copy of Playboy that a student had brought into the lab. Pratt, now 80, tells me he saw nothing out of the ordinary about having a soft porn magazine in a university computer lab in 1973. “I said, ‘There are some pretty nice-looking pictures in there,’ ” he says. “And the grad students picked the one that was in the centerfold.” Lena’s spread, which featured the model wearing boots, a boa, a feathered hat, and nothing else, was attractive from a technical perspective because the photo included, according to Pratt, “lots of high-frequency detail that is difficult to code.”

    Over the course of several years, Pratt’s team amassed a library of digital images; not all of them, of course, were from Playboy. The data set also included photos of a brightly colored mandrill, a rainbow of bell peppers, and several photos, all titled “Girl,” of fully clothed women. But the Lena photo was the one that researchers most frequently used. Over the next 45 years, her face and bare shoulder would serve as a benchmark for image-processing quality for the teams working on Apple Inc.’s iPhone camera, Google Images, and pretty much every other tech product having anything to do with photos. To this day, some engineers joke that if you want your image compression algorithm to make the grade, it had better perform well on Lena.
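
    A crude way to see what "high-frequency detail" means in this context is sketched below, assuming a grayscale image stored as a NumPy array; the flat and random test images are synthetic examples, not the actual benchmark photos. The more energy an image has in pixel-to-pixel differences, the harder it is to compress, which is what made the richly detailed Lena scan a demanding test case.

    import numpy as np

    def high_frequency_energy(img: np.ndarray) -> float:
        """Mean squared difference between neighbouring pixels, a rough measure of fine detail."""
        img = img.astype(float)
        dx = np.diff(img, axis=1)   # horizontal pixel-to-pixel differences
        dy = np.diff(img, axis=0)   # vertical pixel-to-pixel differences
        return float((dx ** 2).mean() + (dy ** 2).mean())

    # A flat gray frame is trivial to compress; random texture is close to incompressible.
    flat = np.full((256, 256), 128.0)
    busy = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
    print(high_frequency_energy(flat))   # 0.0
    print(high_frequency_energy(busy))   # large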

    “We didn’t even think about those things at all when we were doing this,” Pratt says. “It was not sexist.” After all, he continues, no one could have been offended because there were no women in the classroom at the time. And thus began a half-century’s worth of buck-passing in which powerful men in the tech industry defended or ignored the exclusion of women on the grounds that they were already excluded.

    Based on data gathered from a sample of mostly male programmers, the psychologists William Cannon and Dallis Perry decided that happy software engineers shared one striking characteristic: They “don’t like people.” In their final report they concluded that programmers “dislike activities involving close personal interaction; they are generally more interested in things than in people.” There’s little evidence to suggest that antisocial people are more adept at math or computers. Unfortunately, there’s a wealth of evidence to suggest that if you set out to hire antisocial nerds, you’ll wind up hiring a lot more men than women.

    Cannon and Perry’s work, as well as other personality tests that seem, in retrospect, designed to favor men over women, were used in large companies for decades, helping to create the pop culture trope of the male nerd and ensuring that computers wound up in the boys’ side of the toy aisle. They influenced not just the way companies hired programmers but also who was allowed to become a programmer in the first place.

    In 1984, Apple released its iconic Super Bowl commercial showing a heroic young woman taking a sledgehammer to a depressing and dystopian world. It was a grand statement of resistance and freedom. Her image is accompanied by a voice-over intoning, “And you’ll see why 1984 won’t be like 1984.” The creation of this mythical female heroine also coincided with an exodus of women from technology. In a sense, Apple’s vision was right: The technology industry would never be like 1984 again. That year was the high point for women earning degrees in computer science, at 37 percent. As the overall number of computer science degrees picked back up during the dot-com boom, far more men than women filled those coveted seats. The percentage of women in the field would dramatically decline for the next two and a half decades.

    Despite having hired and empowered some of the most accomplished women in the industry, Google hasn’t turned out to be all that different from its peers when it comes to measures of equality—which is to say, it’s not very good at all. In July 2017 the search engine disclosed that women accounted for just 31 percent of employees, 25 percent of leadership roles, and 20 percent of technical roles. That makes Google depressingly average among tech companies.

    Even so, exactly zero of the 13 Alphabet company heads are women. To top it off, representatives from several coding education and pipeline feeder groups have told me that Google’s efforts to improve diversity appear to be more about seeking good publicity than enacting change. One noted that Facebook has been successfully poaching Google’s female engineers because of an “increasingly chauvinistic environment.”

    Last year, the personality tests that helped push women out of the technology industry in the first place were given a sort of reboot by a young Google engineer named James Damore. In a memo that was first distributed among Google employees and later leaked to the press, Damore claimed that Google’s tepid diversity efforts were in fact an overreach. He argued that “biological” reasons, rather than bias, had caused men to be more likely to be hired and promoted at Google than women.

    #Féminisme #Informatique #Histoire_numérique

  • The Web turns 30. And no, it wasn't necessarily better before
    30 years of the Web: revenge porn and cyberattacks are nothing new
    https://www.ladn.eu/tech-a-suivre/data-big-et-smart/meilleur-comme-pire-30-ans-web

    The Web is celebrating its thirtieth birthday. The anniversary has revived the idea that the utopian Web of the early days has slid into a nightmarish version. That conviction needs to be put in perspective. The historian Valérie Schafer reminds us that criminal behavior has existed since the very beginning, as have initiatives aiming to make the Web a space ruled by creativity and equality.

    As you will no doubt have noticed, the Web turns thirty today. It is an anniversary celebrated in a minor key. Many publications denounce what the Web has become: a space corrupted by hatred and by a form of surveillance capitalism. In a Medium op-ed, one of its creators, the computer scientist Tim Berners-Lee, argues that the Web suffers from serious dysfunctions and needs to be saved.
    No, the Web was not necessarily better before

    Even if hatred on the networks seems particularly pervasive today, the idea of an early utopian Web turned nightmare should be put in perspective. "Cyberbullying, spam... already existed on the Internet, in email and forums, even before the Web. The first spam appeared in the 1970s," explains the historian Valérie Schafer.

    "When the Web took off in France, it was accompanied in the mid-1990s by a wave of lawsuits, with the first complaints from the Union des étudiants juifs de France and Licra for incitement to racial hatred, but also cases involving child sexual abuse content and the circulation of a bomb-making recipe on the Internet," says the specialist in digital history. France 2 even devoted a news report to that bomb recipe in August 1995.
    The first revenge porn cases in the 1990s

    Even the first cases of "revenge porn" appeared in the 1990s. "In 1997 the Tribunal de Grande Instance of Privas ruled on the case of a computer science student who had posted pornographic photographs of his ex-girlfriend on the Internet, along with a comment on her 'morals'," notes Valérie Schafer.

    Violence on the networks has therefore existed since the creation of the Web. In a Medium op-ed, Tim Berners-Lee himself concedes that it will be difficult to eradicate such behavior, even if laws can help minimize it. But the computer scientist is skeptical of legislative projects aimed at regulating exchanges on the networks, such as the anti-terrorism regulation. In an interview with Le Monde, he says they could lead to the deployment of tools for mass censorship. And that is one of the paradoxes: making the Internet a better, more peaceful place while avoiding over-regulating it.

    The Web's creator nevertheless believes it can be "saved" by tackling other dysfunctions, notably the fact that the Web rests largely on advertising and the sale of data. He encourages the world's population to come together around a "contract for the Web" to "discuss what we need to make it a better and more open place," he tells Le Monde. But few very concrete actions emerge from his message, other than giving people the ability to control and make use of their own data.
    Wikipedia, symbol of a Web of sharing

    The desire to build a respectful, free cyberspace not based solely on commercial exchange is not new. "Founded on openness, free access and participation, Wikipedia, which started in 2001, embodies many of the values of a Web of information and sharing," notes Valérie Schafer.

    A number of organizations have sought to preserve the values of the Internet and the Web since its creation, such as "the Electronic Frontier Foundation, created back in the 1990s," says Valérie Schafer, or "Framasoft, created in 2001, whose tools offer alternatives to those of the GAFAM by proposing to 'de-Google-ify' the Internet" and by promoting free software.

    Many examples show that equality and creativity are alive and well on the Web. But preserving them requires, according to Valérie Schafer, "awareness and choices on the part of Internet users and politicians, as well as the technical and economic actors."

    #Histoire_numérique #Web #Valérie_Schafer