• Bringing new life to the Altair 8800 on Azure Sphere - Microsoft Tech Community

    Retro-computing is always fascinating.

    I love embedded IoT development, and when a colleague asked if I’d be interested in working on cloud-enabling an Altair 8800 emulator running on Azure Sphere, I seized the opportunity. I’m fascinated by tech and retro computing, and this project combined both interests. Hopefully it will inspire you to fire up this project, learn about the past, and connect to the future.

    The MITS Altair 8800 was built on the Intel 8080, the second 8-bit microprocessor manufactured by Intel: clocked at 2 MHz, with a 16-bit address bus to access a whopping 64 KB of RAM. This was back in 1974, yes, that’s 47 years ago. The Altair 8800 is considered to be the computer that sparked the PC revolution and kick-started Microsoft and Apple.
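    The 64 KB figure is simply arithmetic on the address-bus width; a quick check in Python (a modern aside, not part of the original excerpt):

    ```python
    # A 16-bit address bus can distinguish 2**16 distinct byte addresses.
    addressable_bytes = 2 ** 16
    print(addressable_bytes)          # 65536 bytes
    print(addressable_bytes // 1024)  # 64 (KB)
    ```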

    You can learn more about the Altair at https://en.wikipedia.org/wiki/Altair_8800.


    Altair 8800 image attribution: File:Altair 8800, Smithsonian Museum.jpg - Wikimedia Commons

    The first release of the Altair 8800 was programmed by flipping switches; then came paper tape readers to load apps, monitors and keyboards, and floppy disk drive storage, revolutionary at the time. The first programming language for the machine was Altair BASIC, a program written by Bill Gates and Paul Allen, and Microsoft’s first product.

    Interest in the Altair 8800 is not new: there are several implementations of the open-source Altair 8800 emulator running on various platforms, and if you are keen, you can even buy an Altair 8800 clone. The Altair 8800 running on Azure Sphere builds on those open-source projects and brings something unique: it modernizes and cloud-enables the Altair, bringing 21st-century cloud technologies to a computer generation that predates the internet.

    #Histoire_numérique #Altair

  • This is how we lost control of our faces | MIT Technology Review

    The largest ever study of facial-recognition data shows how much the rise of deep learning has fueled a loss of privacy.

    Karen Hao
    February 5, 2021

    In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.

    Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

    People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Deborah Raji, a co-author of the study. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

    A history of facial-recognition data

    The researchers identified four major eras of facial recognition, each driven by an increasing desire to improve the technology. The first phase, which ran until the 1990s, was largely characterized by manually intensive and computationally slow methods.

    But then, spurred by the realization that facial recognition could track and identify individuals more effectively than fingerprints, the US Department of Defense pumped $6.5 million into creating the first large-scale face data set. Over 15 photography sessions in three years, the project captured 14,126 images of 1,199 individuals. The Face Recognition Technology (FERET) database was released in 1996.

    The following decade saw an uptick in academic and commercial facial-recognition research, and many more data sets were created. The vast majority were sourced through photo shoots like FERET’s and had full participant consent. Many also included meticulous metadata, Raji says, such as the age and ethnicity of subjects, or illumination information. But these early systems struggled in real-world settings, which drove researchers to seek larger and more diverse data sets.

    In 2007, the release of the Labeled Faces in the Wild (LFW) data set opened the floodgates to data collection through web search. Researchers began downloading images directly from Google, Flickr, and Yahoo without concern for consent. LFW also relaxed standards around the inclusion of minors, using photos found with search terms like “baby,” “juvenile,” and “teen” to increase diversity. This process made it possible to create significantly larger data sets in a short time, but facial recognition still faced many of the same challenges as before. This pushed researchers to seek yet more methods and data to overcome the technology’s poor performance.

    Then, in 2014, Facebook used its user photos to train a deep-learning model called DeepFace. While the company never released the data set, the system’s superhuman performance elevated deep learning to the de facto method for analyzing faces. This is when manual verification and labeling became nearly impossible as data sets grew to tens of millions of photos, says Raji. It’s also when really strange phenomena started appearing, like auto-generated labels that included offensive terminology.


    The way the data sets were used began to change around this time, too. Instead of trying to match individuals, new models began focusing more on classification. “Instead of saying, ‘Is this a photo of Karen? Yes or no,’ it turned into ‘Let’s predict Karen’s internal personality, or her ethnicity,’ and boxing people into these categories,” Raji says.

    Amba Kak, the global policy director at AI Now, who did not participate in the research, says the paper offers a stark picture of how the biometrics industry has evolved. Deep learning may have rescued the technology from some of its struggles, but “that technological advance also has come at a cost,” she says. “It’s thrown up all these issues that we now are quite familiar with: consent, extraction, IP issues, privacy.”

    Raji says her investigation into the data has made her gravely concerned about deep-learning-based facial recognition.

    “It’s so much more dangerous,” she says. “The data requirement forces you to collect incredibly sensitive information about, at minimum, tens of thousands of people. It forces you to violate their privacy. That in itself is a basis of harm. And then we’re hoarding all this information that you can’t control to build something that likely will function in ways you can’t even predict. That’s really the nature of where we’re at.”

    #Reconnaissance_faciale #éthique #Histoire_numérique #Surveillance

  • YouTube at 15: what happened to some of the platform’s biggest early stars? | Global | The Guardian

    As YouTube celebrates its 15th birthday, we talk to five early adopters about how the all-singing all-dancing platform has evolved
    by Chris Stokel-Walker
    Sun 16 Feb 2020 12.00 GMT Last modified on Fri 21 Feb 2020 18.40 GMT

    Late on the evening of 14 February 2005, Jawed Karim, Chad Hurley and Steve Chen registered the website YouTube.com. Two months later, when the first video (of Karim briefly describing the elephant enclosure at the San Diego Zoo) was uploaded, a platform was launched that has gone on to change the world.

    Today, more than 2bn of us visit YouTube monthly, and 500 hours of footage is uploaded every minute. That’s a far cry from the 18-second video that started it all. Its stars are multi-millionaires: YouTube’s highest earner in 2019 was an eight-year-old called Ryan, who netted $26m. The number of creators earning five or six figures has increased by more than 40% year on year. At first, users earned a few hundred pounds for mentioning products in their videos; now they can make hundreds of thousands, and much more through exclusive brand deals. Not many like talking about their income: it makes them less relatable.

    The scale of viewership has increased, too. It took eight years for the site to get to 1bn monthly users, and another seven to reach 2bn. There are 65% more channels with more than 1m subscribers than a year ago; the number of channels with more than 1bn views has grown fivefold in the past three years.

    As more money and more eyeballs have entered the frame, the level of competition has increased. What was once a site for hobbyists has turned into a mini-Hollywood, with huge teams of staff churning out content for demanding fans.

    As the website celebrates its 15-year anniversary, five of its significant early stars explain how their relationship with the site has evolved.

    #YouTube #Histoire_numérique

  • Windows turns 35: a visual history - The Verge

    The PC revolution started off life 35 years ago this week. Microsoft launched its first version of Windows on November 20th, 1985, to succeed MS-DOS. It was a huge milestone that paved the way for the modern versions of Windows we use today. While Windows 10 doesn’t look anything like Windows 1.0, it still has many of its original fundamentals like scroll bars, drop-down menus, icons, dialog boxes, and apps like Notepad and MS Paint.

    Windows 1.0 also set the stage for the mouse. If you used MS-DOS then you could only type in commands, but with Windows 1.0 you picked up a mouse and moved windows around by pointing and clicking. Alongside the original Macintosh, the mouse completely changed the way consumers interacted with computers. At the time, many complained that Windows 1.0 focused far too much on mouse interaction instead of keyboard commands. Microsoft’s first version of Windows might not have been well received, but it kick-started a battle between Apple, IBM, and Microsoft to provide computing to the masses.

    #Histoire_numérique #Windows

  • YouTube started as an online dating site - CNET

    Long before Tinder made swiping a thing for matchmaking apps, there was a little-known video site trying to play cupid to the Internet generation: YouTube.

    That’s not exactly what comes to mind when you think of the world’s largest video site, which welcomes a billion visitors a month. But that’s how YouTube, which Google bought in 2006 for $1.65 billion, got its start, said co-founder Steve Chen.

    “We always thought there was something with video there, but what would be the actual practical application?” Chen said Monday at the South by Southwest tech, film and music conference in Austin, Texas. “We thought dating would be the obvious choice.”

    The idea was for single people to make videos introducing themselves and saying what they were looking for, said Chen. After five days no one had uploaded a single video, so he and the other co-founders, Chad Hurley and Jawed Karim, reconsidered.

    #YouTube #Histoire_numérique #Dating

  • 29 Years Ago Today, The First Web Page Went Live. This Is What It Looked Like | IFLScience

    In the 1980s, Tim Berners-Lee became frustrated with how information was shared and stored at the European Organization for Nuclear Research (CERN).

    He noticed that, as well as being distributed inefficiently, information was being lost at the organization, largely due to a high turnover of staff. Technical details of old projects could sometimes be lost forever, or else had to be recovered through lengthy investigations in the event of an emergency. Different divisions of CERN used software written in a variety of programming languages, on different operating systems, making the transfer of knowledge cumbersome and time-consuming.

    In response to these annoyances, in 1989 he made a suggestion that would go on to change the world, under a somewhat lackluster title: Information Management: A Proposal. It described a system where all the different divisions of CERN could publish their own part of the experiment, and everyone else could access it. The system would use hypertext to allow people to publish and read the information on any kind of computer. This was the beginning of the World Wide Web.

    The first web page went live 29 years ago today, on August 6, 1991. As such, you’ve probably seen people online today linking to the “first-ever” web page.

    29 years ago today, Tim Berners-Lee posted the first ever web page.

    This is it https://t.co/MSxAZ2cMZK
    — Russ (@RussInCheshire) August 5, 2020

    If you click the link, this is what you will be greeted with. You’ll probably be instantly confused by the date, as well as the lack of memes and people being incredibly aggressive in the comment section.
    CERN archive.

    While it gives you an idea of what the first web page looked like, we may never know what the actual page displayed on that day in August 1991. There are no screenshots; what you are seeing is the earliest record we have of that first web page, taken in 1992. While we know that when the World Wide Web first launched it contained an explanation of the project itself, of hypertext, and of how to create web pages, the first page of a system designed to prevent the loss of information has, ironically, been lost, perhaps forever.

    Though in retrospect what Berners-Lee had invented was world-changing, at the time its creators were too preoccupied with trying to convince their colleagues of its value and to adopt it to think about archiving their invention for future historians to gawp at.

    “I mean the team at the time didn’t know how special this was, so they didn’t think to keep copies, right?” Dan Noyes, who ran the much larger CERN website in 2013, told NPR. He believes the first incarnation of the world’s first web page is still out there somewhere, probably on a floppy disk or hard drive hanging around in somebody’s house.

    That was how the 1992 version was found.

    “I took a copy of the entire website in a floppy disk on my machine so that I could demonstrate it locally just to show people what it was like. And I ended up keeping a copy of that floppy disk,” Tim Berners-Lee told NPR.

    Unfortunately, despite CERN’s best efforts, the first page itself has not been found. It may never be.

    #Histoire_numérique #Web #Première_page #Tim_Berners_Lee

  • William English, Who Helped Build the Computer Mouse, Dies at 91 - The New York Times

    William English, the engineer and researcher who helped build the first computer mouse and, in 1968, orchestrated an elaborate demonstration of the technology that foretold the computers, tablets and smartphones of today, died on July 26 in San Rafael, Calif. He was 91.

    His death, at a medical facility, was confirmed by his wife, Roberta English, who said the cause was respiratory failure.

    In the late 1950s, after leaving a career in the Navy, Mr. English joined a Northern California research lab called the Stanford Research Institute, or S.R.I. (now known as SRI International). There he met Douglas Engelbart, a fellow engineer who hoped to build a new kind of computer.

    At a time when only specialists used computers, entering and retrieving information through punched cards, typewriters and printouts, Mr. Engelbart envisioned a machine that anyone could use simply by manipulating images on a screen. It was a concept that would come to define the information age, but by his own admission Mr. Engelbart had struggled to explain his vision to others.

    Mr. English, known to everyone as Bill, was one of the few who understood these ideas and who had the engineering talent, patience and social skills needed to realize them. “He was the guy who made everything happen,” said Bill Duvall, who worked alongside Mr. English during those years. “If you told him something needed to be done, he figured out how to do it.”

    After Mr. Engelbart had envisaged the computer mouse and drawn a rough sketch of it on a notepad, Mr. English built it in the mid-1960s. Housed inside a small pinewood case, the device consisted of two electrical mechanisms, called potentiometers, that tracked the movement of two small wheels as they moved across a desktop. They called it a mouse because of the way the computer’s on-screen cursor, called a CAT, seemed to chase the device’s path.

    As they were developing the system, both Mr. English and Mr. Engelbart were part of the government-funded L.S.D. tests conducted by a nearby lab called the International Foundation of Advanced Study. Both took the psychedelic as part of a sweeping effort to determine whether it could “open the mind” and foster creativity.

    Though Mr. Engelbart oversaw the NLS project, the 1968 demonstration in San Francisco was led by Mr. English, who brought both engineering and theater skills to the task. In the mid-1950s he had volunteered as a stage manager for a Bay Area theater troupe called The Actor’s Workshop.

    For the San Francisco event, he used a video projector the size of a Volkswagen Beetle (borrowed from a nearby NASA lab) to arrange and project the live images behind Mr. Engelbart as he demonstrated NLS from the stage. He had been able to set up the wireless link that sent video between the Menlo Park computer lab and the auditorium after befriending a telephone company technician.
    Mr. English helped orchestrate an elaborate demonstration of the technology that foretold the computers, tablets and smartphones of today.

    Three years after the demonstration, Mr. English left S.R.I. and joined a new Xerox lab called the Palo Alto Research Center, or PARC. There he helped adapt many of the NLS ideas for a new machine called the Alto, which became a template for the Apple Macintosh, the first Microsoft Windows personal computers and other internet-connected devices.

    #Histoire_numérique #Souris #Bill_English #SRI_international #Xerox_park #Mother_of_all_demo

  • 25 years of PHP: The personal web tools that ended up everywhere • The Register

    Feature On 8th June 1995 programmer Rasmus Lerdorf announced the birth of “Personal Home Page Tools (PHP Tools)”.

    The PHP system evolved into one that now drives nearly 80 per cent of the websites whose server-side programming language is known, according to figures from w3techs.

    Well-known sites running PHP include every WordPress site (WordPress claims to run “35 per cent of the web”), Wikipedia and Facebook (with caveats - Facebook uses a number of languages including its own JIT-compiled PHP runtime called HHVM). PHP is also beloved by hosting companies, many of whom provide their customers with phpMyAdmin for administering MySQL databases.

    Lerdorf was born in Greenland and grew up in Denmark and Canada. He worked at Yahoo! (a big PHP user) and Etsy. He developed PHP for his own use. “In 1993 programming the web sucked,” he said in a 2017 talk.

    It was CGI written in C and “you had to change the C code and recompile even for a slight change,” he said.

    Perl was “slightly better”, Lerdorf opined, but “you still had to write Perl code to spit out HTML. I didn’t like that at all. I wanted a simple templating language that was built into the web server.”

    The Danish-Canadian programmer’s original idea was that developers still wrote the bulk of their web application in C but “just use PHP as the templating language.” However nobody wanted to write C, said Lerdorf, and people “wanted to do everything in the stupid little templating language I had written, all their business logic.”

    Lerdorf described a kind of battle with early web developers as PHP evolved, with the developers asking for more and more features while he tried to point them towards other languages for what they wanted to do.

    “This is how we got PHP,” he said, “a templating language with business logic features pushed into it.”
    The web’s workhorse

    Such is the penetration of PHP, which Lerdorf said drives around 2 billion sites on 10 million physical machines, that improving efficiency in PHP 7 had a significant impact on global energy consumption. Converting the world from PHP 5.0 to PHP 7 would save 15 billion kilowatt-hours annually and 7.5 billion kg of carbon dioxide emissions, he said – forgetting perhaps that any unused cycles would soon be taken up by machine learning and AI algorithms.

    PHP is the workhorse of the web but not fashionable. The language is easy to use but its dynamic and forgiving nature makes it accessible to developers of every level of skill, so that there is plenty of spaghetti code out there, quick hacks that evolved into bigger projects. In particular, early PHP code was prone to SQL injection bugs as developers stuffed input from web forms directly into SQL statements, or other bugs and vulnerabilities thanks to a feature called register_globals that was on by default and which will “inject your scripts with all sorts of variables,” according to its own documentation.
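    The injection pattern described here is not PHP-specific. A minimal sketch in Python with sqlite3 (an illustrative example, not code from the article) shows why pasting form input straight into a SQL string is dangerous, and how a parameterized query avoids it:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    # Vulnerable: user input pasted directly into the SQL string,
    # the same pattern described in early PHP code above.
    user_input = "alice' OR '1'='1"
    rows = conn.execute(
        "SELECT name FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    # The injected OR clause makes the WHERE always true, so every row
    # comes back: rows == [('alice',)]

    # Safe: a parameterized query treats the input as a value, not as SQL.
    safe = conn.execute(
        "SELECT name FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    # No user is literally named "alice' OR '1'='1", so safe == []
    ```

    The parameterized form is what prepared statements in PHP’s later mysqli and PDO extensions brought to the language itself.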

    There was originally no formal specification for PHP and it is still described as work in progress. It is not a compiled language and object orientation was bolted on rather than being designed in from the beginning as in Java or C# or Ruby. Obsolete versions of PHP are commonplace all over the internet on the basis that as long as it works, nobody touches it. It has become the language that everyone uses but nobody talks about.

    “PHP is not very exciting and there is not much to it,” said Lerdorf in 2002.

    Regularly coming fourth as the most popular language in the Redmonk rankings, PHP has rarely got a mention in the analysis.

    That said, PHP has strong qualities that account for its popularity and longevity. Some of this is to do with Lerdorf himself, who has continued to steer PHP with wisdom and pragmatism, though it is a community project with thousands of contributors. It lacks corporate baggage and has always been free and open source. “The tools are in the public domain distributed under the GNU Public License. Yes, that means they are free!” said Lerdorf in the first announcement.

    The documentation site is a successful blend of reference and user contributions which means you almost always find something useful there, which is unusual. Most important, PHP is reliable and lightweight, which means it performs well in real-world use even if it does not win in benchmarks.

    25 years is a good run and it is not done yet. ®

    #Histoire_numérique #Languages_informatiques #PHP

  • How Bill Gates described the internet ’tidal wave’ in 1995

    As a pioneer of the personal computer revolution, Microsoft co-founder Bill Gates spent decades working toward his goal of putting “a computer on every desk and in every home.”

    So it’s no surprise that, by the mid-1990s, Gates was among the earliest tech CEOs to recognize the vast promise of the internet for reaching that goal and growing his business.

    Exactly 25 years ago today, on May 26, 1995, Gates wrote an internal memo to Microsoft’s executive staff and his direct reports to extol the benefits of expanding the company’s internet presence. Gates, who was still Microsoft’s CEO at that point, titled his memo, simply: “The Internet Tidal Wave.”

    “The Internet is a tidal wave. It changes the rules. It is an incredible opportunity as well as [an] incredible challenge,” Gates wrote in the memo.

    The point of the memo was that the internet was fast becoming a ubiquitous force that was already changing the way people and businesses communicated with each other on a daily basis.

    “I have gone through several stages of increasing my views of [the internet’s] importance,” Gates told Microsoft’s executive team in the memo, which WIRED magazine re-printed in full in 2010. “Now I assign the internet the highest level of importance.”

    Gates goes on to pinpoint his foremost goal for the memo: “I want to make clear that our focus on the Internet is crucial to every part of our business.”

    In the memo, Gates explained a bit about how he saw the internet being used in 1995, with both businesses and individuals publishing an increasing amount of content online, from personal websites to audio and video files.

    “Most important is that the Internet has bootstrapped itself as a place to publish content,” Gates wrote. “It has enough users that it is benefiting from the positive feedback loop of the more users it gets, the more content it gets, and the more content it gets, the more users it gets.”

    Gates saw that as one area where Microsoft would have to seize the available opportunities to serve its software customers. He notes that audio and video content could already be shared online in 1995, including in real-time with phone calls placed over the web and even early examples of online video-conferencing.

    While that technology provided exciting opportunities, Gates says, the audio and video quality of those products at the time was relatively poor. “Even at low resolution it is quite jerky,” he wrote of the video quality at that point, adding that he expected the technology to improve eventually “because the internet will get faster.”

    (And, he was certainly correct there, as video-conferencing software has been in increasingly high demand in recent years and is widely in use now by the millions of American workers currently working remotely due to coronavirus restrictions.)

    Gates writes that improving the internet infrastructure to offer higher quality audio and video content online would be essential to unlocking the promise of the internet. While Microsoft’s Office Suite and Windows software were already popular with computer users, Gates argued that they would need to be optimized for use online in order “to make sure you get your data as fast as you need it.”

    “Only with this improvement and an incredible amount of additional bandwidth and local connections will the internet infrastructure deliver all of the promises of the full blown Information Highway,” Gates wrote before adding, hopefully: “However, it is in the process of happening and all we can do is get involved and take advantage.”

    The then-CEO of Microsoft also pushed the need to beef up Microsoft’s own website, where he said customers and business clients should have access to a wealth of information about the company and its products.

    “Today, it’s quite random what is on the home page and the quality of information is very low,” Gates wrote in the 1995 memo. “If you look up speeches by me all you find are a few speeches over a year old. I believe the Internet will become our most important promotional vehicle and paying people to include links to our home pages will be a worthwhile way to spend advertising dollars.”

    Gates told his employees that Microsoft needed to “make sure that great information is available” on the company’s website, including using screenshots to show examples of the company’s software in action.

    “I think a measurable part of our ad budget should focus on the Internet,” he wrote. “Any information we create — white papers, data sheets, etc., should all be done on our Internet server.”

    After all, Gates argued, the internet offered Microsoft a great opportunity to communicate directly with the company’s customers and clients.

    “We have an opportunity to do a lot more with our resources. Information will be disseminated efficiently between us and our customers with less chance that the press miscommunicates our plans. Customers will come to our ‘home page’ in unbelievable numbers and find out everything we want them to know.”

    Of course, in 1995, it wasn’t just Gates’ fellow executives at Microsoft who needed convincing that the internet was the future. In November of that year, Gates went on CBS’s “Late Show with David Letterman” to promote his book “The Road Ahead” and Microsoft’s then-newly launched Internet Explorer, the company’s first web browser.

    Gates touted the possibilities of the World Wide Web in his interview with Letterman, calling the internet “a place where people can publish information. They can have their own homepage, companies are there, the latest information.”

    The comedian wasn’t particularly impressed. “I heard you could watch a live baseball game on the internet and I was like, does radio ring a bell?” Letterman joked.

    That same year, Gates gave an interview with GQ magazine’s UK edition in which he predicted that, within a decade, people would regularly watch movies, television shows and other entertainment online. In fact, 10 years later, in 2005, YouTube was founded, followed two years later by Netflix’s streaming service.

    However, Gates missed the mark when the interviewer suggested that the internet could also become rife with misinformation that could more easily spread to large groups of impressionable people. The Microsoft co-founder was dubious that the internet would become a repository for what might now be described as “fake news,” arguing that having more opportunities to verify information by authorities, such as experts or journalists, would balance out the spread of misinformation.

    #Histoire_numérique #Bill_Gates #Internet

  • Steven Levy : Streaming celebrates its 25th birthday. Here’s how it all began

    So it’s a good time to say happy birthday to streaming media, which just celebrated its 25th anniversary. Two and a half decades ago, a company called Progressive Networks (later renamed RealNetworks) began using the internet to broadcast live and on-demand audio.

    I spoke with its CEO, Rob Glaser, this week about the origins of streaming internet media. Glaser, with whom I have become friendly over the years, told me that he began pursuing the idea after attending a board meeting for a new organization called the Electronic Frontier Foundation in 1993. During the gathering, he saw an early version of Mosaic, the first web browser truly capable of handling images. “A light bulb went off,” Glaser says. “What if it could do the same for audio and video? Anybody could be a broadcaster, and anybody could hear it from anywhere in the world, anytime they wanted to.”

    Glaser believed it was time for a commercial service. When he launched his on April 25, 1995, the first customers were ABC News and NPR; you could listen to news headlines or Morning Edition. It wasn’t the most user-friendly experience: you had to download his RealAudio app to your desktop and then hope it made a successful connection to the browser. At that point, it worked only on demand. But in September 1995, Progressive Networks began live streaming. Its first real-time broadcast was the audio of a major league baseball game, the Seattle Mariners versus the New York Yankees. (The Mariners won. The losing pitcher was Mariano Rivera, then a starter.) The few who listened from the beginning had to reboot around the seventh inning, as the buffers filled up after two and a half hours or so. By the end of that year, thousands of developers were using Real.

    Other companies began streaming video before Glaser’s company, which introduced RealVideo in 1997. The internet at that point wasn’t robust enough to handle high-quality video, but those in the know understood that it was just a matter of time. “It was clear to me that this was going to be the way that everything is going to be delivered,” says Glaser, who gave a speech around then titled “The Internet as the Next Mass Medium.” That same year, Glaser had a conversation with an entrepreneur named Reed Hastings, who told him of his long-range plan to build a business by shipping physical DVDs to people, and then shift to streaming when the infrastructure could support it. That worked out well. Today, our strong internet supports not only entertainment but social programming from YouTube, Facebook, TikTok and others.

    #Histoire_numérique #Streaming

  • The History of the URL

    On the 11th of January 1982, twenty-two computer scientists met to discuss an issue with ‘computer mail’ (now known as email). Attendees included the guy who would create Sun Microsystems, the guy who made Zork, the NTP guy, and the guy who convinced the government to pay for Unix. The problem was simple: there were 455 hosts on the ARPANET and the situation was getting out of control.

    Even more than the history of the URL, this article surveys the many choices about naming (of emails, of web pages, of servers) that have punctuated the evolution of the internet.

    The Web Application

    In the world of web applications, it can be a little odd to think of the basis for the web being the hyperlink. It is a method of linking one document to another, which was gradually augmented with styling, code execution, sessions, authentication, and ultimately became the social shared computing experience so many 70s researchers were trying (and failing) to create. Ultimately, the conclusion is just as true for any project or startup today as it was then: all that matters is adoption. If you can get people to use it, however slipshod it might be, they will help you craft it into what they need. The corollary is, of course, that if no one is using it, it doesn’t matter how technically sound it might be. There are countless tools, representing millions of hours of work, that precisely no one uses today.

    #Nommage #URL #Histoire_numérique

  • Fred Turner, Aux sources de l’utopie numérique. De la contre-culture à la cyberculture, Stewart Brand un homme d’influence

    Fred Turner revisite l’histoire des origines intellectuelles et sociales de l’internet en suivant le parcours de Stewart Brand, un « entrepreneur réticulaire » (p. 41). L’ouvrage s’ouvre sur une interrogation : comment se fait-il que le mot révolution soit sur toutes les bouches à l’évocation des technologies numériques alors qu’elles étaient le symbole d’un système inhumain qui a mis le monde au bord de l’apocalypse nucléaire ? Pour y répondre, l’auteur s’attache à retracer les origines de l’utopie numérique dans la trajectoire de Stewart Brand, au croisement des mondes sociaux, des idéologies et des objets technologiques.

    #Fred_Turner #Utopie_numérique #Histoire_numérique

  • George Laurer, co-inventor of the barcode, dies at 94 - BBC News

    George Laurer, the US engineer who helped develop the barcode, has died at the age of 94.

    Barcodes, which are made up of black stripes of varying thickness and a 12-digit number, help identify products and transformed the world of retail.
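
    That 12th digit is not part of the product number: it is a check digit computed from the other eleven, which is what lets a scanner reject most misreads. A minimal sketch of the standard UPC-A checksum (the function name is my own):

```python
def upc_check_digit(first11: str) -> int:
    """Compute the 12th (check) digit of a UPC-A code from the first 11.

    Digits in odd positions (1st, 3rd, ...) are weighted 3, digits in even
    positions are weighted 1; the check digit is whatever brings the
    weighted sum up to a multiple of 10.
    """
    if len(first11) != 11 or not first11.isdigit():
        raise ValueError("expected 11 decimal digits")
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(first11))
    return (10 - total % 10) % 10

print(upc_check_digit("03600029145"))  # -> 2, giving the full code 036000291452
```

    Because the weights 3 and 1 are both coprime to 10, any single misread digit changes the total modulo 10, so the scanner can detect it and re-scan.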

    They are now found on products all over the world.

    The idea was pioneered by a fellow IBM employee, but it was not until Laurer developed a scanner that could read codes digitally that it took off.

    Laurer died last Thursday at his home in Wendell, North Carolina, and his funeral was held on Monday.

    It was while working as an electrical engineer with IBM that George Laurer fully developed the Universal Product Code (UPC), or barcode.

    He developed a scanner that could read codes digitally. He also used stripes rather than circles, which were not practical to print.

    The UPC went on to revolutionise “virtually every industry in the world”, IBM said in a tribute on its website.

    In the early 1970s, grocery shops faced mounting costs and the labour-intensive need to put price tags on everything.

    The UPC system used lasers and computers to quickly process items via scanning. This meant fewer pricing errors and easier accounting.

    The first product scanned, in Ohio in June 1974, was a packet of Wrigley’s Juicy Fruit chewing gum. It is now on display at the Smithsonian National Museum of American History in Washington.

    Fellow IBM employee Norman Woodland, who died in 2012, is considered the pioneer of the barcode idea, which he initially based on Morse code.

    Although he patented the concept in the 1950s, he was unable to develop it. It would take a few more years for Laurer to bring the idea to fruition with the help of low-cost laser and computing technology.

    #Code_a_barre #Histoire_numérique

  • The Earliest Unix Code: An Anniversary Source Code Release - CHM

    2019 marks the 50th anniversary of the start of Unix. In the summer of 1969, that same summer that saw humankind’s first steps on the surface of the Moon, computer scientists at the Bell Telephone Laboratories—most centrally Ken Thompson and Dennis Ritchie—began the construction of a new operating system, using a then-aging DEC PDP-7 computer at the labs. As Ritchie would later explain:

    “What we wanted to preserve was not just a good environment to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied from remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.”

    #Histoire_numérique #UNIX

  • Jeu de 7 familles de l’informatique - Interstices

    Computer science is a diverse science! This game is an opportunity to present important figures who have worked, and still work, to shape the discipline and make it evolve over time.

    The “happy families” card-game format puts the spotlight on 42 (+1) personalities and shows that the history of computer science cannot be reduced to the history of computers. Computer science develops through a scientific community that keeps the discipline alive and interacts with others. In this game we propose to explore 7 of its major themes.

    The first family covers « Algorithmes & programmation », the concepts at the root of computational thinking. The second presents the link with mathematics, « Mathématiques & informatique » being intrinsically intertwined disciplines. Computer science is also about protecting information, with the third family, « Sécurité & confidentialité », and about building and disseminating information, represented by the fourth family, « Systèmes & réseaux ». Computer science does not exist without the machines that carry out the computations, found in the fifth family, « Machines & Composants ». Devising automatic solutions inspired by the human mind is the challenge taken up by the sixth family, « Intelligence Artificielle ». Finally, doing computer science requires interfaces that ease access, the concern of the seventh family, « Interaction Homme-Machine ».

    The game has the particularity of including a Joker in the person of Alan Turing (who, given the breadth of his research, has a place in every family).

    In each family you will discover six personalities, ordered (in binary representation, see below) by increasing year of birth. Each of them was included for many reasons. From the great early precursors to the founders of the discipline to scientists active today, by way of the unavoidable names, we offer a necessarily partial view. But if this makes you want to discover the science of computing, and perhaps to join its various actors, the bet will have paid off.

    The personalities were selected on several criteria: balancing the representation of men and women, trying not to restrict our panorama to Western history alone (even though the French are well represented), while seeking to give a historical perspective on the scientific dynamic. In making our choices we came across many other important, often little-known figures. Seven families are not enough to cover all the problem areas of computer science; for that, the game would need at least an expansion with new families!

    We invite you to have fun with the game and to discover all the personalities (together with the objects that accompany them as illustrations of their work).

    #Histoire_numérique #Jeu

  • Britain’s £50 Note Will Honor Computing Pioneer Alan Turing - The New York Times

    LONDON — Alan Turing, the computing pioneer who became one of the most influential code breakers of World War II, has been chosen by the Bank of England to be the new face of its 50-pound note.

    The decision to put Mr. Turing on the highest-denomination English bank note, worth about $62, adds to growing public recognition of his achievements. His reputation during his lifetime was overshadowed by a conviction under Britain’s Victorian laws against homosexuality, and his war work remained a secret until decades later.

    “Alan Turing was an outstanding mathematician whose work has had an enormous impact on how we live today,” Mark Carney, the governor of the Bank of England, said in a statement. “As the father of computer science and artificial intelligence, as well as a war hero, Alan Turing’s contributions were far-ranging and path breaking.”

    “Turing is a giant on whose shoulders so many now stand,” Mr. Carney added.

    The central bank announced last year that it wanted to honor someone in the field of science on the next version of the bill, which was last redesigned in 2011, and Mr. Turing was chosen from a list of 227,299 nominees that included Charles Babbage, Stephen Hawking, Ada Lovelace and Margaret Thatcher (who worked as a chemical researcher before entering politics).

    “The strength of the shortlist is testament to the U.K.’s incredible scientific contribution,” Sarah John, the Bank of England’s chief cashier, said in a statement.

    The bank plans to put the new note into circulation by the end of 2021.

    Bank of England bills feature Queen Elizabeth’s face on one side, and a notable figure from British history on the other. Scientists previously honored in this way include Newton, Darwin and the electrical pioneer Michael Faraday. The current £50 features James Watt, a key figure in the development of the steam engine, and Matthew Boulton, the industrialist who backed him.

    Mr. Turing’s work provided the theoretical basis for modern computers, and for ideas of artificial intelligence. His work on code-breaking machines during World War II also drove forward the development of computing, and is regarded as having significantly affected the course of the war.

    Mr. Turing died in 1954, two years after being convicted under Victorian laws against homosexuality and forced to endure chemical castration. The British government apologized for his treatment in 2009, and Queen Elizabeth granted him a royal pardon in 2013.

    The note will feature a photograph of Mr. Turing from 1951 that is now on display at the National Portrait Gallery. His work will also be celebrated on the reverse side, which will include a table and mathematical formulas from a paper by Mr. Turing from 1936 that is recognized as foundational for computer science.

    #Histoire_numérique #Alan_Turing

  • Jean-Marie Hullot, informaticien visionnaire, technologiste exceptionnel | binaire

    Jean-Marie Hullot was a very great computing professional. Beyond the scientific contributions from the start of his research career at IRIA, detailed further on, few people have had so strong and lasting an impact on everyday computing. We owe directly to him the modern graphical and touch interfaces and interactions, developed first at IRIA, then at NeXT Computer, whose superb machine is still remembered and served, in particular, Tim Berners-Lee in creating the World Wide Web, and finally at Apple through the Macintosh with its MacOSX system and then the iPhone, true revolutions in the field that largely drove the large-scale, user-friendly computing we know today, with, in particular, the smartphone revolution.

    These particularly elegant and intuitive interfaces marked a clean break with everything that had been done before, most of which has in fact been forgotten. It must be understood that they resulted from the conjunction of a very sure aesthetic taste with the creation and mastery of subtle, eminently scientific new programming architectures, which Jean-Marie Hullot had begun to develop while a researcher at IRIA. Another major contribution was the machinery for synchronizing assorted devices, here Macs, iPhones and iPads, so that calendars, to-do lists and the like are automatically up to date as soon as they are modified on any one device, with no transformation needed and whatever the networks used. This transparency, now taken for granted, was hard to achieve and unknown elsewhere. It should be borne in mind that the field in question, local and synchronized human-computer interaction, is deep and difficult, and successes at this level are very rare. Jean-Marie Hullot’s success at NeXT and then Apple, particularly brilliant, also required countless interactions with designers and, above all, directly with Steve Jobs, whose demand for quality was legendary.

    But before his industrial career, Jean-Marie Hullot made many other first-rate scientific contributions. After the École normale supérieure de Saint-Cloud, he quickly became passionate about programming, particularly in LISP. This happened at IRCAM, then home to the only computer in France truly suited to computer science research, the PDP-10 that Pierre Boulez had insisted on to set up the institute. Among those there were Patrick Greussay, author of VLISP and founder of the French LISP school, and Jérôme Chailloux, principal author of the Le_Lisp system, which long dominated the French Artificial Intelligence scene and to which Hullot contributed a great deal.

    After meeting Gérard Huet, whose graduate (DEA) course he had taken at Orsay, he joined IRIA at Rocquencourt for his doctoral work. He began his research in term rewriting, a field that grew out of mathematical logic and universal algebra and is therefore essential to the mathematical foundations of computer science. Starting from the completion algorithm described in Knuth and Bendix’s seminal article, he built a complete completion system for algebraic theories, incorporating the latest advances in the treatment of commutative and associative operators and enabling the bridge to the computation of Gröbner polynomial bases. The KB software that came out of his thesis work had particularly careful algorithmics, which made it possible to experiment with non-trivial axiomatizations, such as the canonical modeling of the movements of the University of Edinburgh robot. The renown of this software earned him a year as a visiting researcher at the Stanford Research Institute in 1980-1981. There, in tandem with Gérard Huet, he developed the foundations of the theory of algebraic rewriting, then in its infancy. His paper Canonical forms and unification, presented at the International Conference on Automated Deduction in 1980, gives a fundamental result on narrowing that made it possible to establish the completeness theorem of the narrowing procedure (Term Rewriting Systems, Cambridge University Press, 2003, p. 297).
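
    The reduction half of the Knuth-Bendix idea can be conveyed with a toy string-rewriting sketch (the rules and names below are illustrative, not Hullot’s KB system): oriented rules are applied until no left-hand side matches, and a terminating, confluent rule set gives every term a unique canonical form. Completion itself additionally derives new rules from overlaps (critical pairs) until confluence holds; this sketch shows only reduction to normal form.

```python
# Toy oriented rewrite rules over words in the letters a, b.
RULES = [
    ("aa", ""),    # a·a -> identity (a is an involution)
    ("bbb", ""),   # b·b·b -> identity
    ("ba", "ab"),  # commute b past a, oriented by a fixed word order
]

def normal_form(word: str) -> str:
    """Apply RULES at the leftmost match until no rule applies."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break  # restart from the first rule
    return word

print(normal_form("baba"))  # -> "bb": "baba" -> "abba" -> "abab" -> "aabb" -> "bb"
```

    With this rule set, two words denote the same group element exactly when their normal forms coincide, which is the “canonical forms” property the thesis title refers to.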

    His doctoral thesis at Université Paris XI-Orsay, Compilation de formes canoniques dans les théories équationnelles, was defended on November 14, 1980. The culmination of his work in effective algebra, it became the bible of researchers in rewriting, by now an essential domain of theoretical computer science. It was also the first French technical document typeset with TeX, then under development by Don Knuth at Stanford, where Jean-Marie Hullot had learned the system. He was struck by the astonishing graphical quality of documents produced with TeX, and also by the bitmap displays then being developed at Xerox’s PARC laboratory.

    In 1981 he returned to INRIA at Rocquencourt, where the national project Sycomore, directed by Jean Vuillemin, was getting under way, and which Jérôme Chailloux, designer of the Le_Lisp language, had just joined. There he discovered the first Macintosh, a pioneering commercial computer that exploited the advances of PARC (bitmap display, windowed interface, Ethernet) and SRI (the mouse). But he quickly found the way its interfaces were programmed quite infernal. As this was the era when object-oriented languages were emerging, he first decided to build his own on top of Le_Lisp, named Ceyx, emphasizing dynamic features absent from the other languages of the time (he later switched to Objective-C, a language of the same kind but far more efficient). This remarkable language, whose implementation was a jewel of simplicity and intelligence, was used notably by Gérard Berry to write his first Esterel compiler.

    This work led to the first interface builder combining direct graphical design with simple programming, SOS Interfaces. It was while presenting this highly original system at a seminar at Stanford University that he met Steve Jobs, then exiled from Apple, who immediately wanted to hire him to create his new machine, NeXT. Even though that machine was not a commercial success, it is still remembered as probably the most elegant ever built, and it was the precursor of everything that came afterwards.

    Jean-Marie Hullot then took the lead on the interfaces and interactions of the new Macintosh as technical director of Apple’s applications division. His creations, and his team’s, still shape modern computing. He later left Apple and California for a time and settled in Paris. There, Steve Jobs called him back to regenerate Apple’s creative spirit, but he refused to return to California and proposed instead to create a telephone, or rather a smartphone, as we would now say. After some difficulty convincing Steve Jobs, who did not much believe in the idea, he created the iPhone in a secret laboratory of some twenty people in Paris. The rest is well known, and rather different from what Steve Ballmer said after Steve Jobs’s first demonstration: “This object has no industrial future”! With more than a billion units sold, it is probably one of the greatest aesthetic and industrial successes in history.

    He also led several technology ventures in France. RealNames, the company he founded in 1996, set out to give the Internet, then booming but anarchic in its naming, a standardized namespace. Later, he sought to create an open infrastructure for photo sharing on the model of the free encyclopedia Wikipedia, and founded Photopedia to that end. These companies did not endure, but they trained many young professionals in cutting-edge technologies who went on to seed new technology ventures in their turn.

    Creative mathematician, visionary computer scientist, elegant programmer, rigorous engineer, peerless technologist, refined aesthete: Jean-Marie Hullot left his mark on his era. The results of his work have quite simply changed the world forever. The Fondation Iris, which he created with his partner Françoise and whose goal is to safeguard the fragile beauty of the world, carries on his humanist message: http://fondationiris.org.

    Gérard Berry and Gérard Huet

    #Histoire_numérique #IHM #iPhone #Interface #Synchronisation

  • The most expensive hyphen in history

    Bugs, bugs, bugs

    By Charles Fishman

    This is the 18th in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing.

    In the dark on Sunday morning, July 22, 1962, NASA launched the first-ever U.S. interplanetary space probe: Mariner 1, headed for Venus, Earth’s neighbor closer to the Sun.

    Mariner 1 was launched atop a 103-foot-tall Atlas-Agena rocket at 5:21 a.m. EDT. For 3 minutes and 32 seconds, it rose perfectly, accelerating to the edge of space, nearly 100 miles up.

    But at that moment, Mariner 1 started to veer in odd, unplanned ways, first aiming northwest, then pointing nose down. The rocket was out of control and headed for the shipping lanes of the North Atlantic. Four minutes and 50 seconds into flight, a range safety officer at Cape Canaveral—in an effort to prevent the rocket from hitting people or land—flipped two switches, and explosives in the Atlas blew the rocket apart in a spectacular cascade of fireworks visible back in Florida.

    The Mariner 1 probe itself was blown free of the debris, and its radio transponder continued to ping flight control for another 67 seconds, until it hit the Atlantic Ocean.

    This was the third failed probe in 1962 alone; NASA had also launched two failed probes to the Moon. But the disappointment was softened by the fact that a second, identical Mariner spacecraft (along with an identical Atlas-Agena rocket) was already in a hangar at the Cape, standing by. Mariner 2 was launched successfully a month later and reached Venus on December 14, 1962, where it discovered that the temperature was 797°F and that the planet rotated in the opposite direction of Earth and Mars. The Sun on Venus rises in the west.

    It was possible to launch Mariner 1’s twin just 36 days after the disaster because it took scientists at NASA’s Jet Propulsion Laboratory only five days to figure out what had gone wrong. In handwritten computer coding instructions, in dozens and dozens of lines of flight guidance equations, a single symbol had been written incorrectly, probably through simple forgetfulness.

    In a critical spot, the equations contained an “R” symbol (for “radius”). The “R” was supposed to have a bar over it, indicating a “smoothing” function; the line told the guidance computer to average the data it was receiving and to ignore what was likely to be spurious data. But as written and then coded onto punch cards and into the guidance computer, the “R” didn’t have a bar over it. The “R-bar” became simply “R.”
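
    The bar denoted exactly this kind of averaging of the tracked radius. As a hypothetical illustration of the difference (these are not NASA’s actual guidance equations), compare a trailing moving average with the raw samples it tames:

```python
def smoothed(samples, window=5):
    """Trailing moving average: each output averages the last `window` samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# The smoothed track spreads a single glitchy radius sample out over the
# window instead of reacting to it at full strength ...
readings = [10, 10, 100, 10, 10]
print(smoothed(readings))  # [10.0, 10.0, 40.0, 32.5, 28.0]
# ... whereas a guidance loop fed the raw values would "correct" against 100.
```

    Without the bar, the ground computer was effectively running on the raw column, which is why a routine loss of guidance-lock turned into a stream of spurious steering commands.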

    As it happened, on launch, Mariner 1 briefly lost guidance-lock with the ground, which was not uncommon. The rocket was supposed to follow its course until guidance-lock was re-achieved, unless it received instructions from the ground computer. But without the R-bar, the ground computer got confused about Mariner 1’s performance, thought it was off course, and started sending signals to the rocket to “correct” its course, instructions that weren’t necessary—and weren’t correct.

    Therefore “phantom erratic behavior” became “actual erratic behavior,” as one analyst wrote. In the minute or so that controllers waited, the rocket and the guidance computer on the ground were never able to get themselves sorted out, because the “averaging” function that would have kept the rocket on course wasn’t programmed into the computer. And so the range safety officer did his job.

    A single handwritten line, the length of a hyphen, doomed the most elaborate spaceship the U.S. had until then designed, along with its launch rocket. Or rather, the absence of that bar doomed it. The error cost $18.5 million ($156 million today).

    In the popular press, for simplicity, the missing bar became a hyphen. The New York Times front-page headline was “For Want of a Hyphen Venus Rocket Is Lost.” The Los Angeles Times headline: “‘Hyphen’ Blows Up Rocket.” The science fiction writer Arthur C. Clarke, in his 1968 book The Promise of Space, called it “the most expensive hyphen in history.”

    For NASA’s computer programmers, it was a lesson in care, caution, and testing that seeped into their bones. During 11 Apollo missions, more than 100 days total of spaceflight, the Apollo flight computers performed without a single fault.

    But what happened to Mariner 1 was, in fact, an arresting vulnerability of the new Space Age. A single missing bolt in a B-52 nuclear bomber wasn’t going to bring down the plane, but a single inattentive moment in computer programming—of the sort anyone can imagine having—could have a cascade of consequences.

    George Mueller was NASA’s associate administrator for manned spaceflight from 1963 to 1969, the most critical period for Apollo’s development. Just before that, Mueller had been an executive at Space Technology Laboratories, which had responsibility for writing the guidance equations for Mariner 1, including the equation with the missing bar.

    During his years at NASA, Mueller kept a reminder of the importance of even the smallest elements of spaceflight on the wall behind his desk: a framed image of a hyphen.

    #Histoire_numérique #Nasa #Mariner

  • Women Once Ruled Computers. When Did the Valley Become Brotopia? - Bloomberg

    Lena Söderberg started out as just another Playboy centerfold. The 21-year-old Swedish model left her native Stockholm for Chicago because, as she would later say, she’d been swept up in “America fever.” In November 1972, Playboy returned her enthusiasm by featuring her under the name Lenna Sjööblom, in its signature spread. If Söderberg had followed the path of her predecessors, her image would have been briefly famous before gathering dust under the beds of teenage boys. But that particular photo of Lena would not fade into obscurity. Instead, her face would become as famous and recognizable as Mona Lisa’s—at least to everyone studying computer science.

    In engineering circles, some refer to Lena as “the first lady of the internet.” Others see her as the industry’s original sin, the first step in Silicon Valley’s exclusion of women. Both views stem from an event that took place in 1973 at a University of Southern California computer lab, where a team of researchers was trying to turn physical photographs into digital bits. Their work would serve as a precursor to the JPEG, a widely used compression standard that allows large image files to be efficiently transferred between devices. The USC team needed to test their algorithms on suitable photos, and their search for the ideal test photo led them to Lena.

    According to William Pratt, the lab’s co-founder, the group chose Lena’s portrait from a copy of Playboy that a student had brought into the lab. Pratt, now 80, tells me he saw nothing out of the ordinary about having a soft porn magazine in a university computer lab in 1973. “I said, ‘There are some pretty nice-looking pictures in there,’ ” he says. “And the grad students picked the one that was in the centerfold.” Lena’s spread, which featured the model wearing boots, a boa, a feathered hat, and nothing else, was attractive from a technical perspective because the photo included, according to Pratt, “lots of high-frequency detail that is difficult to code.”

    Over the course of several years, Pratt’s team amassed a library of digital images; not all of them, of course, were from Playboy. The data set also included photos of a brightly colored mandrill, a rainbow of bell peppers, and several photos, all titled “Girl,” of fully clothed women. But the Lena photo was the one that researchers most frequently used. Over the next 45 years, her face and bare shoulder would serve as a benchmark for image-processing quality for the teams working on Apple Inc.’s iPhone camera, Google Images, and pretty much every other tech product having anything to do with photos. To this day, some engineers joke that if you want your image compression algorithm to make the grade, it had better perform well on Lena.

    “We didn’t even think about those things at all when we were doing this,” Pratt says. “It was not sexist.” After all, he continues, no one could have been offended because there were no women in the classroom at the time. And thus began a half-century’s worth of buck-passing in which powerful men in the tech industry defended or ignored the exclusion of women on the grounds that they were already excluded.

    Based on data they had gathered from a sample of mostly male programmers, the psychologists William Cannon and Dallis Perry decided that happy software engineers shared one striking characteristic: They “don’t like people.” In their final report they concluded that programmers “dislike activities involving close personal interaction; they are generally more interested in things than in people.” There’s little evidence to suggest that antisocial people are more adept at math or computers. Unfortunately, there’s a wealth of evidence to suggest that if you set out to hire antisocial nerds, you’ll wind up hiring a lot more men than women.

    Cannon and Perry’s work, as well as other personality tests that seem, in retrospect, designed to favor men over women, were used in large companies for decades, helping to create the pop culture trope of the male nerd and ensuring that computers wound up in the boys’ side of the toy aisle. They influenced not just the way companies hired programmers but also who was allowed to become a programmer in the first place.

    In 1984, Apple released its iconic Super Bowl commercial showing a heroic young woman taking a sledgehammer to a depressing and dystopian world. It was a grand statement of resistance and freedom. Her image is accompanied by a voice-over intoning, “And you’ll see why 1984 won’t be like 1984.” The creation of this mythical female heroine also coincided with an exodus of women from technology. In a sense, Apple’s vision was right: The technology industry would never be like 1984 again. That year was the high point for women earning degrees in computer science, which peaked at 37 percent. As the number of overall computer science degrees picked back up during the dot-com boom, far more men than women filled those coveted seats. The percentage of women in the field would dramatically decline for the next two and a half decades.

    Despite having hired and empowered some of the most accomplished women in the industry, Google hasn’t turned out to be all that different from its peers when it comes to measures of equality—which is to say, it’s not very good at all. In July 2017 the search engine disclosed that women accounted for just 31 percent of employees, 25 percent of leadership roles, and 20 percent of technical roles. That makes Google depressingly average among tech companies.

    Even so, exactly zero of the 13 Alphabet company heads are women. To top it off, representatives from several coding education and pipeline feeder groups have told me that Google’s efforts to improve diversity appear to be more about seeking good publicity than enacting change. One noted that Facebook has been successfully poaching Google’s female engineers because of an “increasingly chauvinistic environment.”

    Last year, the personality tests that helped push women out of the technology industry in the first place were given a sort of reboot by a young Google engineer named James Damore. In a memo that was first distributed among Google employees and later leaked to the press, Damore claimed that Google’s tepid diversity efforts were in fact an overreach. He argued that “biological” reasons, rather than bias, had caused men to be more likely to be hired and promoted at Google than women.

    #Féminisme #Informatique #Histoire_numérique

  • Le Web a 30 ans. Et non, il n’était pas forcément mieux avant
    The Web at 30: revenge porn and cyberattacks are nothing new

    The Web is celebrating its thirtieth birthday. The anniversary revives the idea that the utopian Web of the early days has slid into a nightmarish version. That conviction needs qualifying. Historian Valérie Schafer points out that criminal behaviour has existed since the beginning, as have initiatives to make the Web a space ruled by creativity and equality.

    It will hardly have escaped your notice: the Web turns thirty today. An anniversary celebrated in a minor key. Many publications denounce what the Web has become: a space corrupted by hate and a form of surveillance capitalism. In a Medium op-ed, one of its creators, the computer scientist Tim Berners-Lee, argues that the Web suffers from serious dysfunctions and needs to be saved.
    No, the Web was not necessarily better before

    While hate on the networks may feel particularly pervasive today, the idea of a utopian early Web turned nightmare should be put in perspective. "Cyberbullying, spam and the like already existed on the internet, in email and on forums, even before the Web. The first spam appeared in the 1970s," explains historian Valérie Schafer.

    "When the Web took off in France in the mid-1990s, it came with a wave of lawsuits, starting with the first complaints by the Union des étudiants juifs de France and the Licra for incitement to racial hatred, but also cases involving child-abuse material, or the circulation of a bomb-making recipe on the internet," recounts the specialist in digital history. France 2 even devoted a news report to that bomb recipe in August 1995.
    The first revenge porn cases date from the 1990s

    Even the first cases of "revenge porn" appeared in the 1990s. "In 1997 the Tribunal de Grande Instance of Privas ruled on the case of a computer-science student who had posted pornographic photographs of his ex-girlfriend on the internet, accompanied by a comment on her 'morals'," notes Valérie Schafer.

    Violence on the networks has therefore existed since the Web's creation. In a Medium op-ed, Tim Berners-Lee himself concedes that such behaviour will be hard to eradicate, even if laws can minimize it. But the computer scientist remains skeptical of legislative projects to regulate online exchanges, such as the anti-terrorism regulation. He believes, in an interview with Le Monde, that they could lead to tools of mass censorship. Therein lies one of the paradoxes: making the internet a better, calmer place while avoiding over-regulating it.

    The Web's creator nonetheless believes it can be "saved" by tackling other dysfunctions, notably the fact that the Web rests largely on advertising and the sale of data. He urges the world's population to rally around a "contract for the Web" to "discuss what we need to make it a better and more open place," he tells Le Monde. But few very concrete actions emerge from his discourse, other than giving people the ability to control and make use of their own data.
    Wikipedia, symbol of a Web of sharing

    The desire to build a respectful, free cyberspace not based solely on commercial exchange is nothing new. "Founded on openness, free access and participation, Wikipedia, which launched in 2001, embodies many of the values of a Web of information and sharing," says Valérie Schafer.

    A number of associations have sought to preserve the values of the internet and the Web since its creation, such as "the Electronic Frontier Foundation, founded as early as the 1990s," notes Valérie Schafer, or "Framasoft, created in 2001, whose tools offer alternatives to those of the GAFAM by proposing to 'De-google-ify' the internet" and by promoting free software.

    Many examples show that equality and creativity are alive and well on the Web. But preserving them requires, according to Valérie Schafer, "awareness and choices, on the part of internet users and politicians, as well as technical and economic players."

    #Histoire_numérique #Web #Valérie_Schafer

  • 30 ans du Web : « Il n’est pas trop tard pour changer le Web », affirme Tim Berners-Lee

    This Tuesday, 12 March 2019, the Web celebrates its thirtieth birthday. Now dominated by giants hungry for personal data, parasitized by manipulation operations of every kind, undermined by cyberattacks and on the verge of being "balkanized," it has never been so contested.

    Yet Tim Berners-Lee, who invented the principle of the Web three decades ago in a Swiss laboratory, is far from having given up hope. The man has already invented the Web. Should we now ask him to save it?
    When you imagined the Web, in 1989, did you anticipate that it would become so important, or did you more simply think you were creating a tool for scientists?

    Tim Berners-Lee: No, it was never a tool only for scientists. I always wanted it to be more than that. I wanted to link everything to everything. Ever since childhood I thought that computers, unlike the human brain, were bad at making connections. If you have a conversation in a café and you go back five years later, your brain will make the connection and you will remember the conversation. I wanted to build something with the property of linking anything to anything. I didn't expect it to be used to link everything! The Web's strength is that it is neutral; it could be used to post articles, images, videos, data, maps... That is why everything is online now.
    Read: The turbulent beginnings of the internet in France
    What are the main challenges facing the Web today?

    In 2019, unfortunately, the list is long. A few years ago I might have mentioned net neutrality, privacy or respect for women. Before, if you stopped someone at random in the street, they would tell you the Web was great. Now they will tell you it can't be trusted, that it is a place where you feel manipulated, where you have lost control... That is why we imagined the "contract for the Web," which calls on technology companies in particular to change many things. It also asks people and governments to discuss what we need to make the Web a better and more open place.

    The media and the technology industry kept repeating that the consumer had made a pact with the devil, trading away privacy to get free things on the internet. We were told that the only way to do business on the internet was through advertising and the exploitation of personal data. I think that myth is exploding before our eyes.

    What most people don't understand is that their data is used not against them but against everyone. The Cambridge Analytica scandal showed that data could be used to manipulate people into voting a certain way. Worrying about privacy used to mean worrying about some photo being made public: but what is really at stake is the use of data.

    #Histoire_numérique #Web #Tim_Berners_Lee

  • RECIT. « Personne n’a compris quoi que ce soit » : comment Tim Berners-Lee a créé le web il y a 30 ans

    Great article, with insights even I didn't know!

    In any case, one thing is clear: with this history there should be enough material to celebrate the Web's 30th birthday every day for four or five years...

    "I sometimes had 50 accounts open on different software and different computers just to exchange data with colleagues." The French engineer François Flückiger, who spent his career at the European Organization for Nuclear Research (CERN), still breaks into a sweat remembering how hard it was to share information before the creation of the Web, which turns 30 on Tuesday 12 March.

    At the end of the 1980s, he was among the handful of scientists on the internet. CERN was connected to the network from 1988. That year, the Swiss campus between Lake Geneva and the Jura mountains was buzzing. A huge construction project was nearing completion: teams of scientists from around the world had finally finished linking up the 27 km tunnel of the Large Electron-Positron collider (LEP), the particle accelerator that preceded the LHC.
    The difficulty of exchanging data

    To make progress, this community of researchers scattered across the planet needed to share an immense mass of disparate data. "The physicists had to exchange all the working documents that keep collaborations running: meeting notes, jointly written papers, but above all the design and construction documents of the detectors" of the LEP, explains François Flückiger, then in charge of external networks at CERN.

    But exchanges were slow and tedious. Before every action, users had to identify themselves. Then, for an exchange to take place between two machines, the first computer had to call the other, and the latter had to call back its counterpart. "Sharing information back then was complicated and worked badly," sums up François Flückiger, recalling the "tyranny of logins" and the "protocol wars."

    Using the internet was extremely complex. It was hellish. (François Flückiger, to franceinfo)

    Today, in everyday language, the terms "internet" and "web" have become interchangeable. But they should be distinguished. The internet, born in the 1970s, is, in short, the infrastructure that interconnects computers and devices. The Web is only one of the applications that use this network, alongside, among others, email, telephony and video calling.
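
    The distinction can be made concrete with a minimal sketch (my own illustration, not from the article): the internet is the shared plumbing, and the web is just one application riding on it alongside email and others. The port numbers below are the standard IANA well-known assignments.

    ```python
    # Illustration (not from the article): the internet carries many
    # application protocols; the web (HTTP) is only one of them.
    # Port numbers are the standard IANA well-known assignments.
    APPLICATIONS_ON_THE_INTERNET = {
        "the web (HTTP)": 80,
        "email (SMTP)": 25,
        "file transfer (FTP)": 21,
        "remote login (Telnet)": 23,
    }

    def summarize() -> list[str]:
        """One line per application, all sharing the same underlying network."""
        return [
            f"{name}: application protocol on port {port}, carried by the internet"
            for name, port in APPLICATIONS_ON_THE_INTERNET.items()
        ]

    for line in summarize():
        print(line)
    ```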

    Before the Web arrived, using the internet was an obstacle course. Faced with these difficulties, members of CERN looked for solutions. Among them was Tim Berners-Lee. The Briton, a physicist by training and self-taught in computing, was part of a team deploying Remote Procedure Call technology, which let a program on one computer invoke programs running on other machines.
    In the beginning was a diagram

    There was no "Eureka moment," as the legend about Isaac Newton under his apple tree would have it, Tim Berners-Lee often repeats. But at the end of 1988, the 34-year-old physicist told his supervisor, Mike Sendall, about his thoughts on improving data sharing. He described a system based on the internet and hypertext, in other words links as we still know them today (like this link to Tim Berners-Lee's memoirs). In reality, the Briton was proposing an improved version of Enquire, a system he had built a few years earlier. That system, also based on hypertext, linked researchers' names to their working topics.

    Mike Sendall asked him to write a note on the subject. Tim Berners-Lee handed it to him on 12 March 1989. The 16-page document, available on the CERN website (PDF), is soberly entitled "Information Management: A Proposal." It shows a sprawling diagram of circles, rectangles and clouds, all connected by arrows. The idea was to link together varied CERN documents that originally had nothing to do with one another. "Vague but exciting," Mike Sendall laconically wrote at the top of the first page of the document now regarded as the founding act of the Web.
    Tim Berners-Lee's note of March 1989 setting out the principle of the web, with the handwritten comment of his supervisor Mike Sendall: "vague but exciting..." (CERN)

    "In 1989, I can assure you that nobody understood a thing," says François Flückiger, who worked in the same building as Tim Berners-Lee, one floor away. And he insists: "Mike Sendall wrote that ['vague but exciting'] but it really was incomprehensible." "I don't think anyone said it was crazy," comments Peggie Rimmer, one of Tim Berners-Lee's supervisors, in the documentary The Web, Past and Future.

    You first have to understand something before you can say it's crazy. We never got to that point. (Peggie Rimmer, in "The Web, Past and Future")

    Incomprehensible as it was, the proposal was not entirely isolated. The same year, on the same campus, a kilometre away, Robert Cailliau had an intuition close to Tim Berners-Lee's. "I wrote a proposal to study hypertext over CERN's networks because I saw lots of physicists carrying floppy disks around or mailing them to one another when in fact there was a network," he explained in 2016 at a lecture at the University of Fribourg (Switzerland).

    But the Belgian quickly set his own project aside and joined the Briton. By his account, Tim Berners-Lee's proposal, "based on the internet," "was much more open, much more usable." Though Tim Berners-Lee had won a first convert, his superiors politely ignored him. They could not allocate him resources: his idea was primarily about computing, not physics, CERN's core mission. That did not stop his supervisor from passively encouraging him by letting him work on it in his spare time.
    A powerful computer and a temporary name

    The British-Belgian duo set to work. The Briton handled the technical side, while the Belgian, long established at CERN, worked his contacts and played evangelist within the institution. "He did a great deal to put Tim Berners-Lee's thinking into simple words that other communities could understand," explains Fabien Gandon, research director in computer science at Inria, who knows Tim Berners-Lee. According to François Flückiger, Robert Cailliau was an "excellent communicator," unlike Tim Berners-Lee, who at the time came across as a sort of absent-minded "Professor Calculus." For him, Robert Cailliau's contribution was crucial.

    Robert Cailliau is not the co-inventor of the web, as has sometimes been written, but there would have been no web without him. (François Flückiger, to franceinfo)

    At the start of 1990, a NeXT computer (the brand freshly launched by Steve Jobs) arrived at CERN. Tim Berners-Lee, impressed, asked his supervisor whether he could get one of his own. The machine, exceptionally powerful for its time, was ideal for developing his project. Mike Sendall approved, justifying the purchase by explaining that Tim Berners-Lee would explore possible uses of the computer for operating the LEP.
    Tim Berners-Lee and Robert Cailliau pose with the NeXT computer on which the Briton coded the first tools of the web, in Geneva (Switzerland), 13 March 2009. (MARTIAL TREZZINI/AP/SIPA)

    While waiting for the computer to arrive, Tim Berners-Lee's thinking progressed. In May 1990 he made a second proposal (PDF), using the word "mesh" to designate his idea. The same month, together with Robert Cailliau, he turned seriously to the question of the project's name. In a note (in English), the Belgian recounts wanting to rule out from the start any reference to Greek gods or Egyptian mythology, a fashionable habit among scientists. "I looked at Nordic mythology but found nothing suitable," he told the New York Times (in English) in 2010.

    Tim Berners-Lee had several ideas of his own. He considered "mesh" but quickly dropped it because it sounded too much like "mess." "Mine of Information" also crossed his mind, but he found the acronym MOI ("me" in French) too egocentric. The same went for "The Information Machine," whose acronym TIM would ring of self-celebration. The Briton was also fond of "World Wide Web." His colleagues were skeptical, pointing out that the acronym "www" is long to pronounce in English: "double-u, double-u, double-u."

    In his memoirs, Tim Berners-Lee notes that for Robert Cailliau, a Flemish speaker, as for speakers of Scandinavian languages, "www" is simply pronounced "weh, weh, weh." "World Wide Web" ended up on the two men's joint proposal, submitted on 12 November 1990 (PDF). But it was, they thought, only a temporary solution.
    Above all, do not switch off the first server

    Meanwhile, the NeXT computer had finally been delivered, in September 1990. Tim Berners-Lee was delighted, remembers Ben Segal, the Briton's mentor. "He said to me: 'Ben, Ben, it's here, come and see!' I went into his office and saw this sexy black cube." Tim Berners-Lee could finally give shape to his project. He shut himself away and, a few days before Christmas, on 20 December, produced the first web page in history and a browser itself called WorldWideWeb. This first site, visible at this address, set out the web's encyclopedic ambition, stating that the project "aims to give universal access to a large universe of documents." It offered, among other things, an overview, a bibliography and a few links.

    Screenshot of the reproduction of the first website, put online in December 1990 by Tim Berners-Lee. (CERN)

    The whole thing held together thanks to the inventions the Briton designed and developed: the HTTP protocol (through which machines can talk to each other without the heaviness previously required), the notion of the URL (which gives every document available on the network a precise address), and the HTML language (the computer language used to write and lay out web pages).
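
    The three inventions can be sketched in a few lines of Python (a hypothetical illustration, not from the article; the URL shown is the historically documented address of CERN's first page, used here only as an example):

    ```python
    from urllib.parse import urlsplit

    # A URL names a document precisely: which protocol, which machine, which path.
    url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
    parts = urlsplit(url)
    print(parts.scheme)  # which protocol to speak
    print(parts.netloc)  # which machine to contact
    print(parts.path)    # which document to fetch

    # HTTP is the (deliberately simple) conversation: one request, one response.
    request = f"GET {parts.path} HTTP/1.0\r\n\r\n"

    # HTML marks up the page itself; the <a href="..."> element is the hypertext link.
    page = '<p>See the <a href="http://info.cern.ch/">WorldWideWeb project</a>.</p>'
    ```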

    If the HTTP protocol and the HTML language work so well together, it is because they came from one and the same brain. (François Flückiger, to franceinfo)

    Tim Berners-Lee's famous NeXT computer served as the server of this embryonic web. In other words: without it, no web. To make sure nobody switched it off by mistake, he stuck a label on it and wrote in red: "This machine is a server. DO NOT POWER IT DOWN!!"
    The web weaves its web

    Eighteen months after the first proposal, the situation changed completely. François Flückiger concedes it frankly: only from this first launch onward was he convinced by Tim Berners-Lee's innovation, anticipating at least a success within the scientific community. The project also won over the Frenchman Jean-François Groff. This young telecom engineer, aged 22, had just arrived at CERN as part of his civilian service, "to work on data acquisition." "Tim Berners-Lee was an office neighbour and a colleague introduced us quite soon after I arrived," he recounts. It was an instant fit. "I had the background needed to understand what he was doing. And having seen the success of the Minitel in France, I immediately grasped the reach his system could have," he adds.

    The young Frenchman quickly shared his ideas with the man then working alone on developing the project. For him, the system had to run on every kind of platform. "Tim agreed. But we needed a bit of time and resources to port the prototype," relates Jean-François Groff. He then began working "under the radar" with Tim Berners-Lee to "write a software library." In the depths of winter, he lost count of the overtime hours spent coding while listening on the radio to the latest news of the Gulf War.

    Often I would finish my normal day around 5 or 6 p.m. I would go home, eat, and join Tim at 9 p.m. until 2 or 3 in the morning. (Jean-François Groff, to franceinfo)

    With the accumulated work, the opening-up accelerated. In March, the software was made available to colleagues on CERN computers. Around the same time, Jean-François Groff switched, unofficially, to working full time with Tim Berners-Lee.

    On 6 August, the Briton announced his innovation outside CERN, sharing on a discussion group a text laying out the broad lines of his project. "We are very interested in spreading the web to other areas. (...) Collaborators welcome," he wrote. With this announcement, the web began to attract interest, to weave its web across other campuses and to spread around the planet. It was the start of a historic revolution, which got a decisive boost when CERN placed the web in the public domain in April 1993.

    But today Tim Berners-Lee says he is "devastated" by what the web has become. He laments the overwhelming power of a handful of giants such as Google, Amazon and Facebook, and deplores the way users' data is exploited. The Briton, knighted in 2004, now campaigns for a decentralized web. With his new system, called Solid (in English), he wants internet users to "take back power" over their personal data. "There will be no more streaming based solely on advertising," he predicted at a conference in October 2018. "From the developers' point of view, their only concern will be to build services that are useful to users." An ambition that largely overturns the current economic model of the web, and reconnects with the ideals of the early days.

    #Histoire_numérique #Web #Tim_Berners_Lee

    • MESH!!! That was also the name of a great app that turned your smartphone into a walkie-talkie. I expected a lot from it but it never took off and I never understood why. I wanted to try it again recently and saw that the app asked for permission to access a whole pile of data; I no longer know what to make of it... Anyway, that's not the point: #merci @hlc for this great find, which doesn't make us any younger!

    • No doubt about it, none of this makes us any younger. But tell me, the rest of you, how long have you been hooked on the internet? Personally, my first connection (dial-up, since I lived in a village of barely 200 people) goes back to 2002. I was 45. My first PC? Bought in 1999 (4.3 GB hard drive, 64 MB of RAM, an AMD K6-2 processor clocked at 350 MHz). People were starting to talk about Google, which described itself as a "meta search engine," and some showed off by claiming they could download copyrighted stuff on Napster. The kids used eMule and it took several days to download a film (56 kbps, after all). I couldn't even watch videos on YouTube. The ISP I chose was Free, fairly competitive on low-bandwidth offers (15 €/month for 50 hours of connection). Then, four years later, when ADSL finally reached my home, I had no choice but to switch to Orange, since the "historic" operator had not yet opened its pipes to competition for us country folk.
      And otherwise I browsed with Internet Explorer (the default browser installed with Windows 98). A few years later I switched to Firefox by ordering an installation CD-ROM with the user manual (in English), which arrived in a nice sleeve straight from the United States. I probably hadn't realized quickly enough that you could get it free by downloading it online, but given the speed of my dial-up connection, I tell myself I may have saved time while financially supporting the Mozilla Foundation...

    • Ah, the beauty of youth... My first time browsing the web was in '93, on a VT 220 terminal (orange characters on a black background, 25 lines of 80 columns). Links were highlighted, in brackets with numbers... and you had to press "escape + the number" to follow a link. Just like in Lynx ;-) The 9600-baud modem was a racing champion compared with the 300-baud acoustic-coupler modem, the one you planted the telephone handset on to get a carrier, which was my first connection (Transpac, mind you, not the internet... in 1984, you really had to be a geek like @Bortzmeyer to have internet access...)

    • My first web page (local only, I had no server) was in '94. I had to give a talk at a conference. I had insisted on having a video projector. At the time those were enormous machines, 1.5 m long!!! But people were enthusiastic: for hours the talkers had been going on about the internet, yet nobody in the room had ever seen anything... so you can imagine my demo brought the house down.
      Of course, it was a local copy of things that existed on the web... but that's the magic of show business: making people believe you're live when you're lip-syncing.
      Well, the end of the story is that it went down so well I was invited to a conference in Moscow in '94. Not bad!
      It must also be said that everything nearly went wrong: the 10 MB (yes, mega) external hard drive on which I had prepared my demo fell off my desk the day before... smashed to pieces. I ran to the computer lab to get it resoldered... and when I got there, the technicians didn't really want to touch it. I did the soldering myself. And off it went again!!!

    • In '91, while organizing a contemporary art festival with a few friends ( https://www.le-terrier.net/albums/acte_festival/index.htm ), our search for young artists in art schools and elsewhere led us, at the start of our tour, to meet Pierre Giquel and Joachim Pfeufer, who taught at the Beaux-Arts in Nantes. During our stay we witnessed a strange experiment that had kept them on tenterhooks for quite some time (I don't know whether or not it was linked to the Poïpoïdrome project, I believe it was, you'd have to ask them): trying to exchange images (so to speak) simultaneously between three very distant geographic locations (my memory says Japan and the USA, but hmmm...).
      I found the technical effort so out of proportion to the result that I saw no point at all in the whole circus, and even less so artistically.
      Five years later, the very first version of Le Terrier appeared on Mygale and kept me up whole nights tinkering with HTML pages, and I considered myself to be living the most exciting artistic adventure since the invention of video.

    • Well no, I was at the Arts Déco; as for the date I'm fairly sure it was in my second year, so 87-88. Don Foresta's art history course. Or else in video, since he also taught video courses, some of which I attended even though I wasn't in the video program. I should ask a friend who was there too and with whom I'm still in touch.

      In any case I'm sure it wasn't in the United States, where I never heard of a connection during the three years I studied there (88-91).

      And I'm sure it wasn't in Japan, where in fact I have never been.

    • I also remember a connection between machines on both sides of the Atlantic, between Paris and the United States, at a photography school where I accompanied Robert Heinecken, who was giving a workshop there. There was a kind of remote screen sharing, but it didn't work well at all. The school was on rue Jules Vallès in Paris, and that was in 1991 or 1992.

    • First connection in 1992; I was doing a DEA (master's) in Canada (Guelph), but I had only one correspondent, a friend doing his thesis at Orsay.

      Back in Paris, at Jussieu, I asked to create a domain name for my lab, but I was the only one using it; the others couldn't see what it was for. My correspondents were still only scientist friends studying at universities.

      I think it was not until 1995 and my postdoc in the USA that I began to have correspondents who were not scientists. At the time my emails were text files and I archived everything. Up to 2005, a year of email "weighs" barely 10 MB...

    • Well now! I'm impressed by the criss-crossing links in these comments. I don't know where to start, so I might as well do it chronologically:

      - in 1987/88 I had a maths teacher, a substitute, very young, and so passionate that in a few weeks he managed to raise the level of the literary-track class I was in. I particularly remember the lesson where he explained and shared his passion: this thing called a computer that did all sorts of things from 0s and 1s! I can still picture that weird contraption, a sort of TV without a picture. I remember telling myself it was completely absurd to "code" everything in 0s and 1s, and surreal enough to appeal to me...

      - a few years later, 1992/93, I remember the excitement and electricity in the basement of the Beaux-Arts school in Nantes around Joachim Pfeufer and the few screens, always occupied, that I never really managed to get near. So I feigned disdain, but above all it was the other lab, the photo lab, that drew me. (Well, in fact I ended up falling out with nearly all the teachers, so I don't have great memories of that single year spent in a place that had made me dream throughout my school years...)

      #mon_premier_ordinateur_à_moi, with immediate internet access since that's why I bought it, came a few years later, but I can no longer remember exactly; I'd say 1998 or '99. I got it on a lease from Ooreka and... it turned my life upside down; it let me learn, understand, work... so much so that when my flat was destroyed by fire in 2000, it was one of the first things I wanted to recover (after my negatives and my camera, of course!)

      I see the #minitel is mentioned nowhere, but for me it's part of the history too, a little ;)

    • @val_k oui, le minitel a pas mal pris de place également dans l’histoire pour moi également ; j’ai pu y découvrir toute la fécondité de l’hétéronymie : un ami avait monté un serveur en 88, à Rennes, qu’il avait appelé Factel. J’y passais des nuits entières à composer des textes/images, sous divers hétéronymes, que je postais sur le forum, tirant des réponses des autres connectés la matière pour les réalisations suivantes ; j’ai quelque part dans mon bordel des pages imprimées sur une matricielle à aiguilles, sans doute très délavées, de ces espèces de calligrammes électroniques. Tout ça a été la matière première de ce qui allait devenir dans les newsgroups (frap, essentiellement) les newsgroup-poems de Olivier Wattez.
      that one played with the tempo of postings (since quite a few people were talking on frap, unrelated messages could slip in at any moment into an attempt to thread a sequence of messages)


      this one with the interleaving of messages producing shifts in meaning


      this one with the limits of ASCII imposed by the formal framing constraints of newsgroups


  • CERN 2019 WorldWideWeb Rebuild

    Hello, World

    In December 1990, an application called WorldWideWeb was developed on a NeXT machine at The European Organization for Nuclear Research (known as CERN) just outside of Geneva. This program, WorldWideWeb, is the antecedent of most of what we consider or know of as “the web” today.

    In February 2019, in celebration of the thirtieth anniversary of the development of WorldWideWeb, a group of developers and designers convened at CERN to rebuild the original browser within a contemporary browser, allowing users around the world to experience the rather humble origins of this transformative technology.
    Party like it’s 1989

    Ready to browse the World Wide Web using WorldWideWeb?

    1. Launch the WorldWideWeb browser.
    2. Select “Document” from the menu on the side.
    3. Select “Open from full document reference”.
    4. Type a URL into the “reference” field.
    5. Click “Open”.

    #Histoire_numérique #WWW #CERN

  • Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy? | The New Yorker

    Since 2011, Zuckerberg has lived in a century-old white clapboard Craftsman in the Crescent Park neighborhood, an enclave of giant oaks and historic homes not far from Stanford University. The house, which cost seven million dollars, affords him a sense of sanctuary. It’s set back from the road, shielded by hedges, a wall, and mature trees. Guests enter through an arched wooden gate and follow a long gravel path to a front lawn with a saltwater pool in the center. The year after Zuckerberg bought the house, he and his longtime girlfriend, Priscilla Chan, held their wedding in the back yard, which encompasses gardens, a pond, and a shaded pavilion. Since then, they have had two children, and acquired a seven-hundred-acre estate in Hawaii, a ski retreat in Montana, and a four-story town house on Liberty Hill, in San Francisco. But the family’s full-time residence is here, a ten-minute drive from Facebook’s headquarters.

    Occasionally, Zuckerberg records a Facebook video from the back yard or the dinner table, as is expected of a man who built his fortune exhorting employees to keep “pushing the world in the direction of making it a more open and transparent place.” But his appetite for personal openness is limited. Although Zuckerberg is the most famous entrepreneur of his generation, he remains elusive to everyone but a small circle of family and friends, and his efforts to protect his privacy inevitably attract attention. The local press has chronicled his feud with a developer who announced plans to build a mansion that would look into Zuckerberg’s master bedroom. After a legal fight, the developer gave up, and Zuckerberg spent forty-four million dollars to buy the houses surrounding his. Over the years, he has come to believe that he will always be the subject of criticism. “We’re not—pick your noncontroversial business—selling dog food, although I think that people who do that probably say there is controversy in that, too, but this is an inherently cultural thing,” he told me, of his business. “It’s at the intersection of technology and psychology, and it’s very personal.”

    At the same time, former Facebook executives, echoing a growing body of research, began to voice misgivings about the company’s role in exacerbating isolation, outrage, and addictive behaviors. One of the largest studies, published last year in the American Journal of Epidemiology, followed the Facebook use of more than five thousand people over three years and found that higher use correlated with self-reported declines in physical health, mental health, and life satisfaction. At an event in November, 2017, Sean Parker, Facebook’s first president, called himself a “conscientious objector” to social media, saying, “God only knows what it’s doing to our children’s brains.” A few days later, Chamath Palihapitiya, the former vice-president of user growth, told an audience at Stanford, “The short-term, dopamine-driven feedback loops that we have created are destroying how society works—no civil discourse, no coöperation, misinformation, mistruth.” Palihapitiya, a prominent Silicon Valley figure who worked at Facebook from 2007 to 2011, said, “I feel tremendous guilt. I think we all knew in the back of our minds.” Of his children, he added, “They’re not allowed to use this shit.” (Facebook replied to the remarks in a statement, noting that Palihapitiya had left six years earlier, and adding, “Facebook was a very different company back then.”)

    In March, Facebook was confronted with an even larger scandal: the Times and the British newspaper the Observer reported that a researcher had gained access to the personal information of Facebook users and sold it to Cambridge Analytica, a consultancy hired by Trump and other Republicans which advertised using “psychographic” techniques to manipulate voter behavior. In all, the personal data of eighty-seven million people had been harvested. Moreover, Facebook had known of the problem since December of 2015 but had said nothing to users or regulators. The company acknowledged the breach only after the press discovered it.

    We spoke at his home, at his office, and by phone. I also interviewed four dozen people inside and outside the company about its culture, his performance, and his decision-making. I found Zuckerberg straining, not always coherently, to grasp problems for which he was plainly unprepared. These are not technical puzzles to be cracked in the middle of the night but some of the subtlest aspects of human affairs, including the meaning of truth, the limits of free speech, and the origins of violence.

    Zuckerberg is now at the center of a full-fledged debate about the moral character of Silicon Valley and the conscience of its leaders. Leslie Berlin, a historian of technology at Stanford, told me, “For a long time, Silicon Valley enjoyed an unencumbered embrace in America. And now everyone says, Is this a trick? And the question Mark Zuckerberg is dealing with is: Should my company be the arbiter of truth and decency for two billion people? Nobody in the history of technology has dealt with that.”

    In 2002, Zuckerberg went to Harvard, where he embraced the hacker mystique, which celebrates brilliance in pursuit of disruption. “The ‘fuck you’ to those in power was very strong,” the longtime friend said. In 2004, as a sophomore, he embarked on the project whose origin story is now well known: the founding of Thefacebook.com with four fellow-students (“the” was dropped the following year); the legal battles over ownership, including a suit filed by twin brothers, Cameron and Tyler Winklevoss, accusing Zuckerberg of stealing their idea; the disclosure of embarrassing messages in which Zuckerberg mocked users for giving him so much data (“they ‘trust me.’ dumb fucks,” he wrote); his regrets about those remarks, and his efforts, in the years afterward, to convince the world that he has left that mind-set behind.

    New hires learned that a crucial measure of the company’s performance was how many people had logged in to Facebook on six of the previous seven days, a measurement known as L6/7. “You could say it’s how many people love this service so much they use it six out of seven days,” Parakilas, who left the company in 2012, said. “But, if your job is to get that number up, at some point you run out of good, purely positive ways. You start thinking about ‘Well, what are the dark patterns that I can use to get people to log back in?’ ”
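    The L6/7 metric described above is simple enough to sketch in a few lines. Here is a toy illustration of the computation (the function name, field names, and sample data are invented for the example; this is not Facebook's actual pipeline):

    ```python
    from datetime import date, timedelta

    def l6_of_7(login_days_by_user, as_of):
        """Fraction of users who logged in on at least 6 of the 7 days ending at `as_of`.

        `login_days_by_user` maps a user id to the set of `date`s on which
        that user logged in. Toy illustration of the L6/7 idea only.
        """
        if not login_days_by_user:
            return 0.0
        # The 7-day window: `as_of` and the six days before it.
        window = {as_of - timedelta(days=i) for i in range(7)}
        qualifying = sum(
            1 for days in login_days_by_user.values()
            if len(days & window) >= 6  # logged in on 6+ of the 7 days
        )
        return qualifying / len(login_days_by_user)

    # Example: one heavy user (7/7 days), one light user (2/7 days).
    today = date(2018, 9, 10)
    logins = {
        "heavy": {today - timedelta(days=i) for i in range(7)},
        "light": {today, today - timedelta(days=3)},
    }
    print(l6_of_7(logins, today))  # 0.5
    ```

    Framed this way, the quote's point is visible in the code itself: the only lever for raising the number is getting more users over the six-day threshold, by whatever means.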

    Facebook engineers became a new breed of behaviorists, tweaking levers of vanity and passion and susceptibility. The real-world effects were striking. In 2012, when Chan was in medical school, she and Zuckerberg discussed a critical shortage of organs for transplant, inspiring Zuckerberg to add a small, powerful nudge on Facebook: if people indicated that they were organ donors, it triggered a notification to friends, and, in turn, a cascade of social pressure. Researchers later found that, on the first day the feature appeared, it increased official organ-donor enrollment more than twentyfold nationwide.

    Sean Parker later described the company’s expertise as “exploiting a vulnerability in human psychology.” The goal: “How do we consume as much of your time and conscious attention as possible?” Facebook engineers discovered that people find it nearly impossible not to log in after receiving an e-mail saying that someone has uploaded a picture of them. Facebook also discovered its power to affect people’s political behavior. Researchers found that, during the 2010 midterm elections, Facebook was able to prod users to vote simply by feeding them pictures of friends who had already voted, and by giving them the option to click on an “I Voted” button. The technique boosted turnout by three hundred and forty thousand people—more than four times the number of votes separating Trump and Clinton in key states in the 2016 race. It became a running joke among employees that Facebook could tilt an election just by choosing where to deploy its “I Voted” button.

    These powers of social engineering could be put to dubious purposes. In 2012, Facebook data scientists used nearly seven hundred thousand people as guinea pigs, feeding them happy or sad posts to test whether emotion is contagious on social media. (They concluded that it is.) When the findings were published, in the Proceedings of the National Academy of Sciences, they caused an uproar among users, many of whom were horrified that their emotions may have been surreptitiously manipulated. In an apology, one of the scientists wrote, “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

    Facebook was, in the words of Tristan Harris, a former design ethicist at Google, becoming a pioneer in “persuasive technology.”

    Facebook had adopted a buccaneering motto, “Move fast and break things,” which celebrated the idea that it was better to be flawed and first than careful and perfect. Andrew Bosworth, a former Harvard teaching assistant who is now one of Zuckerberg’s longest-serving lieutenants and a member of his inner circle, explained, “A failure can be a form of success. It’s not the form you want, but it can be a useful thing to how you learn.” In Zuckerberg’s view, skeptics were often just fogies and scolds. “There’s always someone who wants to slow you down,” he said in a commencement address at Harvard last year. “In our society, we often don’t do big things because we’re so afraid of making mistakes that we ignore all the things wrong today if we do nothing. The reality is, anything we do will have issues in the future. But that can’t keep us from starting.”

    In contrast to a traditional foundation, an L.L.C. can lobby and give money to politicians, without as strict a legal requirement to disclose activities. In other words, rather than trying to win over politicians and citizens in places like Newark, Zuckerberg and Chan could help elect politicians who agree with them, and rally the public directly by running ads and supporting advocacy groups. (A spokesperson for C.Z.I. said that it has given no money to candidates; it has supported ballot initiatives through a 501(c)(4) social-welfare organization.) “The whole point of the L.L.C. structure is to allow a coördinated attack,” Rob Reich, a co-director of Stanford’s Center on Philanthropy and Civil Society, told me. The structure has gained popularity in Silicon Valley but has been criticized for allowing wealthy individuals to orchestrate large-scale social agendas behind closed doors. Reich said, “There should be much greater transparency, so that it’s not dark. That’s not a criticism of Mark Zuckerberg. It’s a criticism of the law.”

    The question of languages is fundamental when it comes to social networks

    Beginning in 2013, a series of experts on Myanmar met with Facebook officials to warn them that it was fuelling attacks on the Rohingya. David Madden, an entrepreneur based in Myanmar, delivered a presentation to officials at the Menlo Park headquarters, pointing out that the company was playing a role akin to that of the radio broadcasts that spread hatred during the Rwandan genocide. In 2016, C4ADS, a Washington-based nonprofit, published a detailed analysis of Facebook usage in Myanmar, and described a “campaign of hate speech that actively dehumanizes Muslims.” Facebook officials said that they were hiring more Burmese-language reviewers to take down dangerous content, but the company repeatedly declined to say how many had actually been hired. By last March, the situation had become dire: almost a million Rohingya had fled the country, and more than a hundred thousand were confined to internal camps. The United Nations investigator in charge of examining the crisis, which the U.N. has deemed a genocide, said, “I’m afraid that Facebook has now turned into a beast, and not what it was originally intended.” Afterward, when pressed, Zuckerberg repeated the claim that Facebook was “hiring dozens” of additional Burmese-language content reviewers.

    More than three months later, I asked Jes Kaliebe Petersen, the C.E.O. of Phandeeyar, a tech hub in Myanmar, if there had been any progress. “We haven’t seen any tangible change from Facebook,” he told me. “We don’t know how much content is being reported. We don’t know how many people at Facebook speak Burmese. The situation is getting worse and worse here.”

    I saw Zuckerberg the following morning, and asked him what was taking so long. He replied, “I think, fundamentally, we’ve been slow at the same thing in a number of areas, because it’s actually the same problem. But, yeah, I think the situation in Myanmar is terrible.” It was a frustrating and evasive reply. I asked him to specify the problem. He said, “Across the board, the solution to this is we need to move from what is fundamentally a reactive model to a model where we are using technical systems to flag things to a much larger number of people who speak all the native languages around the world and who can just capture much more of the content.”

    Reading newspapers, or aggregators?

    I once asked Zuckerberg what he reads to get the news. “I probably mostly read aggregators,” he said. “I definitely follow Techmeme”—a roundup of headlines about his industry—“and the media and political equivalents of that, just for awareness.” He went on, “There’s really no newspaper that I pick up and read front to back. Well, that might be true of most people these days—most people don’t read the physical paper—but there aren’t many news Web sites where I go to browse.”

    A couple of days later, he called me and asked to revisit the subject. “I felt like my answers were kind of vague, because I didn’t necessarily feel like it was appropriate for me to get into which specific organizations or reporters I read and follow,” he said. “I guess what I tried to convey, although I’m not sure if this came across clearly, is that the job of uncovering new facts and doing it in a trusted way is just an absolutely critical function for society.”

    Zuckerberg and Sandberg have attributed their mistakes to excessive optimism, a blindness to the darker applications of their service. But that explanation ignores their fixation on growth, and their unwillingness to heed warnings. Zuckerberg resisted calls to reorganize the company around a new understanding of privacy, or to reconsider the depth of data it collects for advertisers.


    In barely two years, the mood in Washington had shifted. Internet companies and entrepreneurs, formerly valorized as the vanguard of American ingenuity and the astronauts of our time, were being compared to Standard Oil and other monopolists of the Gilded Age. This spring, the Wall Street Journal published an article that began, “Imagine a not-too-distant future in which trustbusters force Facebook to sell off Instagram and WhatsApp.” It was accompanied by a sepia-toned illustration in which portraits of Zuckerberg, Tim Cook, and other tech C.E.O.s had been grafted onto overstuffed torsos meant to evoke the robber barons. In 1915, Louis Brandeis, the reformer and future Supreme Court Justice, testified before a congressional committee about the dangers of corporations large enough that they could achieve a level of near-sovereignty “so powerful that the ordinary social and industrial forces existing are insufficient to cope with it.” He called this the “curse of bigness.” Tim Wu, a Columbia law-school professor and the author of a forthcoming book inspired by Brandeis’s phrase, told me, “Today, no sector exemplifies more clearly the threat of bigness to democracy than Big Tech.” He added, “When a concentrated private power has such control over what we see and hear, it has a power that rivals or exceeds that of elected government.”

    When I asked Zuckerberg whether policymakers might try to break up Facebook, he replied, adamantly, that such a move would be a mistake. The field is “extremely competitive,” he told me. “I think sometimes people get into this mode of ‘Well, there’s not, like, an exact replacement for Facebook.’ Well, actually, that makes it more competitive, because what we really are is a system of different things: we compete with Twitter as a broadcast medium; we compete with Snapchat as a broadcast medium; we do messaging, and iMessage is default-installed on every iPhone.” He acknowledged the deeper concern. “There’s this other question, which is just, laws aside, how do we feel about these tech companies being big?” he said. But he argued that efforts to “curtail” the growth of Facebook or other Silicon Valley heavyweights would cede the field to China. “I think that anything that we’re doing to constrain them will, first, have an impact on how successful we can be in other places,” he said. “I wouldn’t worry in the near term about Chinese companies or anyone else winning in the U.S., for the most part. But there are all these places where there are day-to-day more competitive situations—in Southeast Asia, across Europe, Latin America, lots of different places.”

    The rough consensus in Washington is that regulators are unlikely to try to break up Facebook. The F.T.C. will almost certainly fine the company for violations, and may consider blocking it from buying big potential competitors, but, as a former F.T.C. commissioner told me, “in the United States you’re allowed to have a monopoly position, as long as you achieve it and maintain it without doing illegal things.”

    Facebook is encountering tougher treatment in Europe, where antitrust laws are stronger and the history of fascism makes people especially wary of intrusions on privacy. One of the most formidable critics of Silicon Valley is the European Union’s top antitrust regulator, Margrethe Vestager.

    In Vestager’s view, a healthy market should produce competitors to Facebook that position themselves as ethical alternatives, collecting less data and seeking a smaller share of user attention. “We need social media that will allow us to have a nonaddictive, advertising-free space,” she said. “You’re more than welcome to be successful and to dramatically outgrow your competitors if customers like your product. But, if you grow to be dominant, you have a special responsibility not to misuse your dominant position to make it very difficult for others to compete against you and to attract potential customers. Of course, we keep an eye on it. If we get worried, we will start looking.”


    As hard as it is to curb election propaganda, Zuckerberg’s most intractable problem may lie elsewhere—in the struggle over which opinions can appear on Facebook, which cannot, and who gets to decide. As an engineer, Zuckerberg never wanted to wade into the realm of content. Initially, Facebook tried blocking certain kinds of material, such as posts featuring nudity, but it was forced to create long lists of exceptions, including images of breast-feeding, “acts of protest,” and works of art. Once Facebook became a venue for political debate, the problem exploded. In April, in a call with investment analysts, Zuckerberg said glumly that it was proving “easier to build an A.I. system to detect a nipple than what is hate speech.”

    The cult of growth leads to the curse of bigness: every day, a billion things were being posted to Facebook. At any given moment, a Facebook “content moderator” was deciding whether a post in, say, Sri Lanka met the standard of hate speech or whether a dispute over Korean politics had crossed the line into bullying. Zuckerberg sought to avoid banning users, preferring to be a “platform for all ideas.” But he needed to prevent Facebook from becoming a swamp of hoaxes and abuse. His solution was to ban “hate speech” and impose lesser punishments for “misinformation,” a broad category that ranged from crude deceptions to simple mistakes. Facebook tried to develop rules about how the punishments would be applied, but each idiosyncratic scenario prompted more rules, and over time they became byzantine. According to Facebook training slides published by the Guardian last year, moderators were told that it was permissible to say “You are such a Jew” but not permissible to say “Irish are the best, but really French sucks,” because the latter was defining another people as “inferiors.” Users could not write “Migrants are scum,” because it is dehumanizing, but they could write “Keep the horny migrant teen-agers away from our daughters.” The distinctions were explained to trainees in arcane formulas such as “Not Protected + Quasi protected = not protected.”

    It will hardly be the last quandary of this sort. Facebook’s free-speech dilemmas have no simple answers—you don’t have to be a fan of Alex Jones to be unnerved by the company’s extraordinary power to silence a voice when it chooses, or, for that matter, to amplify others, to pull the levers of what we see, hear, and experience. Zuckerberg is hoping to erect a scalable system, an orderly decision tree that accounts for every eventuality and exception, but the boundaries of speech are a bedevilling problem that defies mechanistic fixes. The Supreme Court, defining obscenity, landed on “I know it when I see it.” For now, Facebook is making do with a Rube Goldberg machine of policies and improvisations, and opportunists are relishing it. Senator Ted Cruz, Republican of Texas, seized on the ban of Jones as a fascist assault on conservatives. In a moment that was rich even by Cruz’s standards, he quoted Martin Niemöller’s famous lines about the Holocaust, saying, “As the poem goes, you know, ‘First they came for Alex Jones.’ ”

    #Facebook #Histoire_numérique

  • Google erases ’Don’t be evil’ from code of conduct after 18 years | ZDNet

    At some point in the past month, Google removed its famous ‘Don’t be evil’ motto from the introduction to its code of conduct.

    As spotted by Gizmodo, the phrase was dropped from the preface of Google’s code of conduct in late April or early May.

    Until then, ‘Don’t be evil’ were the first words of the opening and closing sentences of Google’s code of conduct, and had been part of it since 2000.

    The phrase occasionally guided debate within the company. The 4,000 staff protesting Google’s work for the Pentagon’s AI Project Maven referred to the motto to highlight how the contract conflicted with the company’s values.

    Google’s parent company, Alphabet, also adopted and still retains a variant of the motto in the form of ‘Do the right thing’.

    A copy of Google’s Code of Conduct page from April 21 on the Wayback Machine shows the old version.

    “‘Don’t be evil.’ Googlers generally apply those words to how we serve our users. But ‘Don’t be evil’ is much more than that. Yes, it’s about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it’s also about doing the right thing more generally — following the law, acting honorably, and treating co-workers with courtesy and respect.

    “The Google Code of Conduct is one of the ways we put ‘Don’t be evil’ into practice. It’s built around the recognition that everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct.

    “We set the bar that high for practical as well as aspirational reasons: Our commitment to the highest standards helps us hire great people, build great products, and attract loyal users. Trust and mutual respect among employees and users are the foundation of our success, and they are something we need to earn every day.”

    The whole first paragraph has been removed from the current Code of Conduct page, which now begins with:

    “The Google Code of Conduct is one of the ways we put Google’s values into practice. It’s built around the recognition that everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct.

    “We set the bar that high for practical as well as aspirational reasons: Our commitment to the highest standards helps us hire great people, build great products, and attract loyal users. Respect for our users, for the opportunity, and for each other are foundational to our success, and are something we need to support every day.”

    While the phrase no longer leads Google’s code of conduct, one remnant remains at the end.

    “And remember... don’t be evil, and if you see something that you think isn’t right — speak up.”

    #Google #Histoire_numérique #Motto #Evil