position:computer scientist

  • Training a single AI model can emit as much carbon as five cars in their lifetimes - MIT Technology Review

    In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).

    It’s a jarring quantification of something AI researchers have suspected for a long time. “While probably many of us have thought of this in an abstract, vague level, the figures really show the magnitude of the problem,” says Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain, who was not involved in the research. “Neither I nor other researchers I’ve discussed them with thought the environmental impact was that substantial.”

    They found that the computational and environmental costs of training grew proportionally to model size and then exploded when additional tuning steps were used to increase the model’s final accuracy. In particular, they found that a tuning process known as neural architecture search, which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error, had extraordinarily high associated costs for little performance benefit. Without it, the most costly model, BERT, had a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent, close to a round-trip trans-American flight.
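The “exhaustive trial and error” of neural architecture search can be pictured as a loop that scores one candidate design after another and keeps the best; the energy cost comes from each score being a full training run. Below is a minimal toy sketch where the search space, the `score()` stub, and every name are invented for illustration, with a cheap deterministic stand-in where a real system would train an entire network.

```python
import itertools
import random

# Toy sketch of the NAS search loop: enumerate candidate architectures,
# "train" and score each one, keep the best. All names and the search
# space are hypothetical; in a real run each score() call is a full,
# expensive training job, which is where the carbon cost comes from.

SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "hidden_units": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def score(config):
    # Stand-in for "train the network and measure validation accuracy".
    # Seeded on the config so the sketch is deterministic and cheap.
    random.seed(str(sorted(config.items())))
    return random.random()

def exhaustive_search(space):
    best_config, best_score = None, float("-inf")
    keys = sorted(space)
    for values in itertools.product(*(space[k] for k in keys)):
        config = dict(zip(keys, values))
        s = score(config)
        if s > best_score:
            best_config, best_score = config, s
    return best_config, best_score

config, acc = exhaustive_search(SEARCH_SPACE)
print(config, acc)
```

Even this tiny space has 3 × 3 × 2 = 18 candidates; realistic spaces have thousands, each requiring hours or days of GPU training, which is the multiplier behind the emissions figures above.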

    What’s more, the researchers note that the figures should only be considered as baselines. “Training a single model is the minimum amount of work you can do,” says Emma Strubell, a PhD candidate at the University of Massachusetts, Amherst, and the lead author of the paper. In practice, it’s much more likely that AI researchers would develop a new model from scratch or adapt an existing model to a new data set, either of which can require many more rounds of training and tuning.

    The significance of those figures is colossal—especially when considering the current trends in AI research. “In general, much of the latest research in AI neglects efficiency, as very large neural networks have been found to be useful for a variety of tasks, and companies and institutions that have abundant access to computational resources can leverage this to obtain a competitive advantage,” Gómez-Rodríguez says. “This kind of analysis needed to be done to raise awareness about the resources being spent [...] and will spark a debate.”

    “What probably many of us did not comprehend is the scale of it until we saw these comparisons,” echoed Siva Reddy, a postdoc at Stanford University who was not involved in the research.
    The privatization of AI research

    The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research.

    #Intelligence_artificielle #Consommation_énergie #Empreinte_carbone

  • The Five Most Historically Significant Virtual Characters

This is part of a series I’m writing to celebrate the release of my book, The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are In A Video Game, on the 20th anniversary of the Matrix. See www.zenentrepreneur.com for more information. A version of this article first appeared in Variety. Virtual characters are all the rage lately. At the Grammys this year, Google showed off an AR dancing version of “Childish Gambino” (aka Donald Glover) as part of its AR Playground. Using the app’s camera and augmented reality, you can see the performer dancing in real-world settings. But virtual characters have been around in movies, TV shows, and video games for many years. Virtual YouTube characters, known as virtual influencers (...)

    #virtual-reality #hackernoon-top-story #the-matrix #virtual-character #science-fiction

  • YouTube Executives Ignored Warnings, Let Toxic Videos Run Rampant - Bloomberg

    Wojcicki’s media behemoth, bent on overtaking television, is estimated to rake in sales of more than $16 billion a year. But on that day, Wojcicki compared her video site to a different kind of institution. “We’re really more like a library,” she said, staking out a familiar position as a defender of free speech. “There have always been controversies, if you look back at libraries.”

    Since Wojcicki took the stage, prominent conspiracy theories on the platform—including one on child vaccinations; another tying Hillary Clinton to a Satanic cult—have drawn the ire of lawmakers eager to regulate technology companies. And YouTube is, a year later, even more associated with the darker parts of the web.

    The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

But that is exactly the point: NO! It cannot be a “library,” because a library keeps only documents that have been published, and thus have already passed a first round of validation (or at least of editorial responsibility... someone will answer for it in court if it comes to that).

YouTube is... YouTube, something peculiar to the internet, which fulfills a major function... and is also a danger to thought because of the “attention economy.”

    The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

In response to criticism about prioritizing growth over safety, Facebook Inc. has proposed a dramatic shift in its core product. YouTube, meanwhile, has struggled to explain any new corporate vision to the public and investors – and sometimes, to its own staff. Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure. Within Google, YouTube’s inability to fix its problems has remained a major gripe. Google shares slipped in late morning trading in New York on Tuesday, leaving them up 15 percent so far this year. Facebook stock has jumped more than 30 percent in 2019, after getting hammered last year.

YouTube’s inertia was illuminated again after a deadly measles outbreak drew public attention to vaccination conspiracies on social media several weeks ago. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels that have spread these lies reached over 170 million viewers, many of whom were then recommended other videos laden with conspiracy theories.

    So YouTube, then run by Google veteran Salar Kamangar, set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters.

“They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October 2016, YouTube hit its goal.

    YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention. It’s one that people on the political fringes have easily exploited, said Brittan Heller, a fellow at Harvard University’s Carr Center. “They don’t know how the algorithm works,” she said. “But they do know that the more outrageous the content is, the more views.”

    People inside YouTube knew about this dynamic. Over the years, there were many tortured debates about what to do with troublesome videos—those that don’t violate its content policies and so remain on the site. Some software engineers have nicknamed the problem “bad virality.”

    Yonatan Zunger, a privacy engineer at Google, recalled a suggestion he made to YouTube staff before he left the company in 2016. He proposed a third tier: Videos that were allowed to stay on YouTube, but, because they were “close to the line” of the takedown policy, would be removed from recommendations. “Bad actors quickly get very good at understanding where the bright lines are and skating as close to those lines as possible,” Zunger said.

    His proposal, which went to the head of YouTube policy, was turned down. “I can say with a lot of confidence that they were deeply wrong,” he said.

    Rather than revamp its recommendation engine, YouTube doubled down. The neural network described in the 2016 research went into effect in YouTube recommendations starting in 2015. By the measures available, it has achieved its goal of keeping people on YouTube.

    “It’s an addiction engine,” said Francis Irving, a computer scientist who has written critically about YouTube’s AI system.

    Wojcicki and her lieutenants drew up a plan. YouTube called it Project Bean or, at times, “Boil The Ocean,” to indicate the enormity of the task. (Sometimes they called it BTO3 – a third dramatic overhaul for YouTube, after initiatives to boost mobile viewing and subscriptions.) The plan was to rewrite YouTube’s entire business model, according to three former senior staffers who worked on it.

    It centered on a way to pay creators that isn’t based on the ads their videos hosted. Instead, YouTube would pay on engagement—how many viewers watched a video and how long they watched. A special algorithm would pool incoming cash, then divvy it out to creators, even if no ads ran on their videos. The idea was to reward video stars shorted by the system, such as those making sex education and music videos, which marquee advertisers found too risqué to endorse.

Coders at YouTube labored for at least a year to make the project workable. But company managers failed to appreciate how the project could backfire: paying based on engagement risked making its “bad virality” problem worse, since it could have rewarded videos that achieved popularity through outrage. One person involved said that the algorithms for doling out payments were tightly guarded. If it had gone into effect, this person said, it’s likely that someone like Alex Jones—the Infowars creator and conspiracy theorist with a huge following on the site, before YouTube booted him last August—would have suddenly become one of the highest-paid YouTube stars.
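The payout scheme described above (pool incoming cash, then divvy it out by engagement) can be sketched in a few lines. The actual Project Bean algorithms were tightly guarded, so everything below is a hypothetical illustration of the general shape: a fixed pool split proportionally to watch time, with invented channel names and numbers. It also reproduces the failure mode the article flags, since the channel with the most watch time takes the largest share regardless of how it earned that attention.

```python
# Hypothetical sketch of an engagement-based payout pool, in the spirit of
# the "Project Bean" idea described above. This is NOT YouTube's actual
# formula; the proportional-to-watch-time rule and all figures are invented.

def divvy_pool(pool_cents, watch_minutes_by_creator):
    """Split pool_cents among creators in proportion to watch minutes."""
    total = sum(watch_minutes_by_creator.values())
    if total == 0:
        return {creator: 0 for creator in watch_minutes_by_creator}
    return {
        creator: pool_cents * minutes // total  # integer cents, floor division
        for creator, minutes in watch_minutes_by_creator.items()
    }

payouts = divvy_pool(
    1_000_000,  # a $10,000 pool, in cents
    {"sex_ed_channel": 5_000, "music_channel": 3_000, "outrage_channel": 12_000},
)
print(payouts)
```

Note that even demonetized or ad-unfriendly channels get paid under this rule, which was the stated goal, but a channel that rides outrage to the most watch time automatically takes the biggest cut.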

In February 2018, a video calling the Parkland shooting victims “crisis actors” went viral on YouTube’s trending page. Policy staff suggested soon after limiting recommendations on the page to vetted news sources. YouTube management rejected the proposal, according to a person with knowledge of the event. The person didn’t know the reasoning behind the rejection, but noted that YouTube was then intent on accelerating its viewing time for videos related to news.

    #YouTube #Economie_attention #Engagement #Viralité

  • The 10 Computer Scientists That Made Computers Mainstream

These are scientists who made a significant contribution to the field and will be forever remembered for their work. Here are 10 computer scientists who made history. 1. Alan Turing. Alan Turing was an English computer scientist, widely considered to be the father of computer science. The prestigious “Turing Award” was named after him — an award given to those in computer science who make a significant contribution to the industry. Turing worked for the British government, playing a pivotal role in cracking intercepted coded messages and enabling the Allies to defeat the Nazis in many crucial engagements. Despite the sheer brilliance of his work, he was not fully recognised for his contributions, as he was homosexual, which was illegal in the UK at the time. Alan Turing’s biography. 2. Tim (...)

    #technology #computer-science #programming #tech

  • Are We Already in the #matrix ?

Science and the Simulation Hypothesis point to many reasons we may already be in the Matrix. Note: This is one in a series of articles for the 20th anniversary of the release of The Matrix, and the release of my new book, The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics, and Eastern Mystics All Agree We Are In a Video Game. Here, I’ll review some of the scientific reasons why this may be the case. A version of this article was originally published on scientificinquirer.com. From Science Fiction to Science. This year on March 31 marks the 20th anniversary of the release of the groundbreaking film The Matrix, and the release of my new book, The Simulation Hypothesis. The Matrix was influential in (...)

    #are-we-in-the-matrix #hackernoon-top-story #in-the-matrix

  • How will Security Tokens fare in 2019?

But first, shoutouts to our investors of the week: Louis Lebbos, Faizan Khan, Antoine Tardif, & Rizwan Virk! As security tokens continue their march towards tokenizing economies, Hackernoon presents to you its top stories on security tokens. Hey, Utsav Jaiswal here, Hacker Noon’s new blockchain editor. Before we dive into these security token stories, we’d like to give a huge congratulations to our contributor & investor Riz Virk on the upcoming release of his book: The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics All Agree We Are In a Video Game (available March 31). Riz has been a longtime Hackernoon contributing writer, and we recommend pre-ordering his book today. You can read more about Riz in this week’s Vice story: How (...)

    #hacker-hodl #security-tokenization #security-token #hackernoon-letter #sto

  • Tech suffers from lack of humanities, says Mozilla head | Technology | The Guardian

    Mitchell Baker, head of the Mozilla Foundation, has warned that hiring employees who mainly come from Stem – science, technology, engineering and maths – will produce a new generation of technologists with the same blindspots as those who are currently in charge, a move that will “come back to bite us”.

“Stem is a necessity, and educating more people in Stem topics is clearly critical,” Baker told the Guardian. “Every student of today needs some higher level of literacy across the Stem bases.

    “But one thing that’s happened in 2018 is that we’ve looked at the platforms, and the thinking behind the platforms, and the lack of focus on impact or result. It crystallised for me that if we have Stem education without the humanities, or without ethics, or without understanding human behaviour, then we are intentionally building the next generation of technologists who have not even the framework or the education or vocabulary to think about the relationship of Stem to society or humans or life.”

    “We need to be adding not social sciences of the past, but something related to humanity and how to think about the effects of technology on humanity – which is partly sociology, partly anthropology, partly psychology, partly philosophy, partly ethics … it’s some new formulation of all of those things, as part of a Stem education,” Baker told the Guardian.

    “Otherwise we’ll have ourselves to blame, for generations of technologists who don’t even have the toolsets to add these things in.”

    Kathy Pham, the computer scientist at Mozilla who is leading the challenge, said “Students of computer science go on to be the next leaders and creators in the world, and must understand how code intersects with human behaviour, privacy, safety, vulnerability, equality, and many other factors.

    “Just like how algorithms, data structures, and networking are core computer science classes, we are excited to help empower faculty to also teach ethics and responsibility as an integrated core tenet of the curriculum.”

    #Mozilla #Développeurs #Education #Université #Humanités

  • New studies show how easy it is to identify people using genetic databases - STAT

In recent months, consumer genealogy websites have unleashed a revolution in forensics, allowing law enforcement to use family trees to track down the notorious Golden State Killer in California and solve other cold cases across the country. But while the technique has put alleged killers behind bars, it has also raised questions about the implications for genetic privacy.

    According to a pair of studies published Thursday, your genetic privacy may have already eroded even further than previously realized.

    In an analysis published in the journal Science, researchers used a database run by the genealogy company MyHeritage to look at the genetic information of nearly 1.3 million anonymized people who’ve had their DNA analyzed by a direct-to-consumer genomics company. For nearly 60 percent of those people, it was possible to track down someone whose DNA was similar enough to indicate they were third cousins or closer in relation; for another 15 percent of the samples, second cousins or closer could be found.

    Yaniv Erlich, the lead author on the Science paper, said his team’s findings should prompt regulators and others to reconsider the assumption that genetic information is de-identified. “It’s really not the case. At least technically, it seems feasible to identify some significant part of the population” with such investigations, said Erlich, who’s a computer scientist at Columbia University and chief science officer at MyHeritage.

    The Science paper counted 12 cold cases that were solved between April and August of this year when law enforcement turned to building family trees based on genetic data; a 13th case was an active investigation.

    The most famous criminal identified this way: the Golden State Killer, who terrorized California with a series of rapes and murders in the 1970s and 1980s. With the help of a genetic genealogist, investigators uploaded a DNA sample collected from an old crime scene to a public genealogy database, built family trees, and tracked down relatives. They winnowed down their list of potential suspects to one man with blue eyes, and in April, they made the landmark arrest.

    To crack that case, the California investigators used GEDmatch, an online database that allows people who got their DNA analyzed by companies like 23andMe and Ancestry to upload their raw genetic data so that they can track down distant relatives. MyHeritage’s database — which contains data from 1.75 million people, mostly Americans who’ve gotten their DNA analyzed by MyHeritage’s genetic testing business — works similarly, although it explicitly prohibits forensic searches. (23andMe warns users about the privacy risks of uploading their genetic data to such third party sites.)

    “For me, these articles are fascinating and important and we shouldn’t shy away from the privacy concerns that these articles raise. But at the same time, we should keep in mind the personal and societal value that we believe that we are accruing as we make these large collections,” said Green, who was not involved in the new studies and is an adviser for genomics companies including Helix and Veritas Genetics.

    He pointed to the potential of genomics not only to reunite family members and put criminals behind bars, but also to predict and prevent heritable diseases and develop new drugs.

    As with using social media and paying with credit cards online, reaping the benefits of genetic testing requires accepting a certain level of privacy risk, Green said. “We make these tradeoffs knowing that we’re trading some vulnerability for the advantages,” he said.

    #Génomique #ADN #Vie_privée

  • Delete Your Account Now: A Conversation with Jaron Lanier (https://...


    Harper Simon asks Jaron Lanier about his latest book, “Ten Arguments for Deleting Your Social Media Accounts Right Now.”

    HN Discussion: https://news.ycombinator.com/item?id=18189958 Posted by prostoalex (karma: 66116) Post stats: Points: 73 - Comments: 36 - 2018-10-10T23:50:47Z

    #HackerNews #account #conversation #delete #jaron #lanier #now #with #your

    Article content:

    OCTOBER 8, 2018

    JARON LANIER IS ONE of the leading philosophers of the digital age, as well as a computer scientist and avant-garde composer. His previous books include Dawn of the New Everything: Encounters with Reality and Virtual Reality, Who Owns the Future?, and the seminal You Are Not a Gadget: A Manifesto. His latest book bears a self-explanatory (...)

  • Is Computer Programming a Form of Art?

“Since the publication of “The Art of Computer Programming” by Donald E. Knuth in 1968, the notion that programs can be considered works of art has been familiar to computer scientists, but the general public has taken little notice of such works of art. For example, there are no art reviews where computer programs are presented and evaluated based on their artistic value. This paper, written for the occasion of Donald Knuth’s 80th birthday, attempts to fill this gap. It presents and evaluates three small programs. The selection reflects the personal preferences and taste of the author.”

  • Out in the Open: Take Back Your Privacy With #Briar (title edited) | WIRED

You and your contacts keep complete control of your data, but you needn’t set up your own computer server in order to do so. Plus, you can send messages without even connecting to the internet. Using Briar, you can send messages over Bluetooth, a shared WiFi connection, or even a shared USB stick. That could be a big advantage for people in places where internet connections are unreliable, censored, or non-existent.

    Briar is the work of computer scientist Michael Rogers, security expert Eleanor Saitta, interaction designer Bernard Tyers, software engineer Ximin Luo, and a few other volunteers.

    #privacy #communication #encryption

  • The Most Important Object In Computer Graphics History Is This Teapot - Facts So Romantic

Let’s play a game. I’ll show you a picture and a couple videos—just watch the first five seconds or so—and you figure out what they have in common. Ready? Did you spot it? Each of them depicts the exact same object: a shiny, slightly squashed-looking teapot. You may not have thought much of it if you saw it in that episode of The Simpsons, in Toy Story, in your old PC screensaver, or in any of the other films and games it’s crept into over the years. Yet this unassuming object—the “Utah teapot,” as it’s affectionately known—has had an enormous influence on the history of computing, dating back to 1974, when computer scientist Martin Newell was a Ph.D. student at the University of Utah. The U of U was a powerhouse of computer (...)

  • Algorithms should be regulated for safety like cars, banks, and drugs, says computer scientist Ben Shneiderman — Quartz

    When these programs are wrong—like when Facebook mistakes you for your sibling or even your mom—it’s hardly a problem. In other situations, though, we give artificial intelligence much more responsibility, with larger consequences when it inevitably backfires.

Ben Shneiderman, a computer scientist from the University of Maryland, thinks the risks are big enough that it’s time for the government to get involved. In a lecture on May 30 to the Alan Turing Institute in London, he called for a “National Algorithm Safety Board,” similar to the US’s National Transportation Safety Board for vehicles, which would provide both ongoing and retroactive oversight for high-stakes algorithms.

    “When you go to systems which are richer in complexity, you have to adopt a new philosophy of design,” Shneiderman argued in his talk. His proposed National Algorithm Safety Board, which he also suggested in an article in 2016, would provide an independent third party to review and disclose just how these programs work. It would also investigate algorithmic failures and inform the public about them—much like bank regulators report on bank failures, transportation watchdogs look into major accidents, and drug licensing bodies look out for drug interactions or toxic side-effects. Since “algorithms are increasingly vital to national economies, defense, and healthcare systems,” Shneiderman wrote, “some independent oversight will be helpful.”

This is close to the ETC Group’s proposal for an Office of Assessment of Technology. There is something worth digging into here, to restore a collective sense of purpose in the face of the technological headlong rush (or rather, the technological hubris).

    #algorithmes #politique_numérique #intelligence_artificielle

  • “Look at this,” he says and shows me how, before the US election, hundreds upon hundreds of websites were set up to blast out just a few links, articles that were all pro-Trump. “This is being done by people who understand information structure, who are bulk buying domain names and then using automation to blast out a certain message. To make Trump look like he’s a consensus.”

    Robert Mercer: the big data billionaire waging war on mainstream media
    With links to Donald Trump, Steve Bannon and Nigel Farage, the rightwing US computer scientist is at the heart of a multimillion-dollar propaganda network.
    Carole Cadwalladr, The Guardian, on Feb. 26, 2017

  • A Computer Just Clobbered Four Pros At Poker | FiveThirtyEight

    About three weeks ago, I was in a Pittsburgh casino for the beginning of a 20-day man-versus-machine poker battle. Four top human pros were beginning to take on a state-of-the-art artificial intelligence program running on a brand new supercomputer in a game called heads-up no-limit Texas Hold ’em. The humans’ spirits were high as they played during the day and dissected the bot’s strategy over short ribs and glasses of wine late into the evening.

    On Monday evening, however, the match ended and the human pros were in the hole about $1.8 million. For some context, the players (four men and the machine, named Libratus) began each of the 120,000 hands with $20,000 in play money, and posted blinds of $50 and $100.


    Tuomas Sandholm, a Carnegie Mellon computer scientist who created the program with his Ph.D. student Noam Brown, was giddy last week on the match’s livestream, at one point cheering for his bot as it turned over a full house versus human pro Jason Les’s flush in a huge pot, and proudly comparing Libratus’s triumph to Deep Blue’s monumental win over Garry Kasparov in chess.

    And, indeed, some robot can now etch heads-up no-limit Texas Hold ‘em (2017) alongside checkers (1995), chess (1997), Othello (1997), Scrabble (c. 2006), limit Hold ‘em (2008), Jeopardy! (2011) and Go (2016) into the marble cenotaph of human-dominated intellectual pursuits.

    Brown told me that he was keen to tackle other versions of poker with his A.I. algorithms. What happens when a bot like this sits down at a table with many other players, rather than just a one-on-one foe, for example? Sandholm, on the other hand, is quick to say that this isn’t really about poker at all. “The AI’s algorithms are not for poker: they are game independent,” his daily email updates read. The other “games” the algorithms may be applied to in the future: “negotiation, cybersecurity, military setting, auctions, finance, strategic pricing, as well as steering evolution and biological adaptation.”

    #transhumanisme #singularité #jeux

  • Why Did Obama Just Honor Bug-free Software? - Facts So Romantic

    The Presidential Medal of Freedom, America’s highest civilian honor, is usually associated with famous awardees—people like Bruce Springsteen, Stephen Hawking, and Sandra Day O’Connor. So as a computer scientist, I was thrilled to see one of this year’s awards go to a lesser-known pioneer: one Margaret Hamilton. You might call Hamilton the founding mother of software engineering. In fact, she coined the very term. She concluded that the way forward was rigorously specified design, an approach that still underpins many modern software engineering techniques—“design by contract” and “statically typed” programming languages, for example. But not all engineers are on board with her vision. Hamilton’s approach represents just one side of a long-standing tug-of-war over the “right way” to develop (...)

  • Apollo code developer Margaret Hamilton receives Presidential Medal of Freedom | MIT News

    Margaret H. Hamilton, a pioneering computer scientist and former head of the Software Engineering Division of MIT’s Instrumentation Laboratory who led the development of on-board flight software for NASA’s Apollo moon missions, has been awarded the Presidential Medal of Freedom.

    Hamilton, who also spent time as a computer scientist at MIT Lincoln Laboratory before starting her own software company, was honored for her contributions “to concepts of asynchronous software, priority scheduling and priority displays, and human-in-the-loop decision capability, which set the foundation for modern, ultra-reliable software design and engineering.”

    The Presidential Medal of Freedom is the nation’s highest civilian honor, presented by the sitting president to individuals who have made especially meritorious contributions to the national interests of the United States, to world peace, or to cultural or other significant public or private endeavors.

  • Seymour Papert, computer scientist, born 29 February 1928; died 31 July 2016 | The Guardian

    Child’s play had been considered largely inconsequential, but Piaget saw that it was an essential part of a child’s cognitive development. Children were “learning by doing”. Today’s educational toy industry started from there.

    Papert understood that mathematics was abstract and theoretical, and that was how it was taught to children. That was why most of them did not understand it. The answer, he thought, was to give children a physical way to think of mathematical ideas.

    #jeu #enseignement #informatique #interactivité #matérialisation #pionniers #logo #lego

    (I would welcome a more interesting text.)

  • Blockchain & Startups

    • Why Innovative Companies Are Using The Blockchain


Every financial institution in the world has some sort of internal R&D effort aimed at understanding how the Blockchain will affect their business.

    • Why Companies Like Orange Silicon Valley Are Working With Private Blockchain Startups


    “Although permission is restricted with a private blockchain, they are still decentralized, without a singular authority. The blockchain is used to establish trust between partners and to remove or reduce the role of a clearing house. Companies can create different assets, specify transaction speed, impose privacy requirements and decide who may take part. Shared between members of a consortium, private enterprise blockchains can be used to build trust and efficiency between partners.

    Companies would prefer to do without some of the baggage of Bitcoin; for example, the high energy costs of mining. Private blockchains can be tailored for the specific use cases of enterprises and their partners. By analogy, the public internet benefited from the development of the intranets, their private counterpart, and vice versa.”


    Public blockchains are for customer-facing applications that formerly involved a trusted third party. Private blockchains are for use within a single entity, or within an industry consortium whose members want to be transparent with each other, but where there is no need for public transparency.

    Don’t get lost in the scrum of competing blockchain protocols. Given the frenzy of attention in this area, today’s performance and security problems are transient. We are at a place similar to where client-server computing was in the 1990s, facing some growing pains, but about to take the world by storm.

    • Bitcoin Or Ethereum, Which Blockchain Is Right For Your Startup?


    Five years after Bitcoin’s release, prodigy Vitalik Buterin created Ethereum, the most notable of the second-generation blockchains. Buterin approved of the presence of scripting features in Bitcoin, but he saw that they were very limited. Ethereum provides a Turing-complete computing environment in its blockchain, which is the computer scientist’s way of saying that it includes a full-featured programming language. You can write a program in Solidity, the Python-like language of Ethereum, release it into the blockchain, and it’ll run on whatever Ethereum node is handy when conditions trigger its execution. That doesn’t sound like much, but it’s the foundation of workable smart contracts, the thing that enabled the creation of The DAO, and which will permit all sorts of financial innovation going forward. While this can theoretically be done on the Bitcoin blockchain, smart contracts are far more streamlined on Ethereum, which was built specifically for this use case.
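    The “code that runs when conditions trigger its execution” idea can be sketched in plain Python (not Solidity, and nothing here is a real Ethereum API; all names are hypothetical, for illustration only): a smart contract is essentially state plus rules, where funds move only once a condition coded into the contract holds.

    ```python
    # Toy illustration of the smart-contract idea: an escrow that
    # releases a payment only after the buyer confirms delivery.
    # Plain Python, hypothetical names; not actual Solidity or Ethereum.

    class EscrowContract:
        """Holds a payment until the buyer confirms delivery."""

        def __init__(self, buyer, seller, amount):
            self.buyer = buyer
            self.seller = seller
            self.amount = amount
            self.delivered = False
            self.paid_out = False

        def confirm_delivery(self, caller):
            # Only the buyer may confirm; a real contract would
            # verify a cryptographic signature instead of a name.
            if caller != self.buyer:
                raise PermissionError("only the buyer can confirm delivery")
            self.delivered = True

        def release(self):
            # Funds move only once the coded condition holds, and only once.
            if self.delivered and not self.paid_out:
                self.paid_out = True
                return (self.seller, self.amount)
            return None

    contract = EscrowContract(buyer="alice", seller="bob", amount=10)
    assert contract.release() is None      # condition not yet met
    contract.confirm_delivery("alice")
    print(contract.release())              # ('bob', 10)
    ```

    On Ethereum the analogous logic would live on-chain and execute on whichever node processes the triggering transaction; the point of the sketch is only the shape of the idea, not the mechanics.
    
    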

    #bitcoin #blockchain #ethereum #The_DAO #DAO #Bitfinex

  • The Most Important Object In Computer Graphics History Is This Teapot - Facts So Romantic

    Let’s play a game. I’ll show you a picture and a couple videos—just watch the first five seconds or so—and you figure out what they have in common. Ready? Here we go. Did you spot it? Each of them depicts the exact same object: a shiny, slightly squashed-looking teapot. You may not have thought much of it if you saw it in that episode of The Simpsons, in Toy Story, in your old PC screensaver, or in any of the other films and games it’s crept into over the years. Yet this unassuming object—the “Utah teapot,” as it’s affectionately known—has had an enormous influence on the history of computing, dating back to 1974, when computer scientist Martin Newell was a Ph.D. student at the University of Utah. The U of U was a powerhouse of computer (...)

  • The collaboration curse

    A growing body of academic evidence demonstrates just how serious the problem is. Gloria Mark of the University of California, Irvine, discovered that interruptions, even short ones, increase the total time required to complete a task by a significant amount. A succession of studies have shown that multitasking reduces the quality of work as well as dragging it out. Sophie Leroy, formerly of the University of Minnesota (now at the University of Washington Bothell) has added an interesting twist to this argument: jumping rapidly from one task to another also reduces efficiency because of something she calls “attention residue”. The mind continues to think about the old task even as it jumps to a new one.

    A second objection is that, whereas managers may notice the benefits of collaboration, they fail to measure its costs. Rob Cross and Peter Gray of the University of Virginia’s business school estimate that knowledge workers spend 70-85% of their time attending meetings (virtual or face-to-face), dealing with e-mail, talking on the phone or otherwise dealing with an avalanche of requests for input or advice. Many employees are spending so much time interacting that they have to do much of their work when they get home at night. Tom Cochran, a former chief technology officer of Atlantic Media, calculated that the midsized firm was spending more than $1m a year on processing e-mails, with each one costing on average around 95 cents in labour costs. “A free and frictionless method of communication,” he notes, has “soft costs equivalent to procuring a small company Learjet.”

    Mark Bolino of the University of Oklahoma points to a hidden cost of collaboration. Some employees are such enthusiastic collaborators that they are asked to weigh in on every issue. But it does not take long for top collaborators to become bottlenecks: nothing happens until they have had their say—and they have their say on lots of subjects that are outside their competence.

    The biggest problem with collaboration is that it makes what Mr Newport calls “deep work” difficult, if not impossible. Deep work is the killer app of the knowledge economy: it is only by concentrating intensely that you can master a difficult discipline or solve a demanding problem. Many of the most productive knowledge workers go out of their way to avoid meetings and unplug electronic distractions. Peter Drucker, a management thinker, argued that you can do real work or go to meetings but you cannot do both. Jonathan Franzen, an author, unplugs from the internet when he is writing. Donald Knuth, a computer scientist, refuses to use e-mail on the ground that his job is to be “on the bottom of things” rather than “on top of things”. Richard Feynman, a legendary physicist, extolled the virtues of “active irresponsibility” when it came to taking part in academic meetings.

  • Lauri Love

    Lauri Love is a computer scientist from Stradishall in the UK who has a long history of involvement in political activism. A dual UK-Finnish national on his mother’s side, Lauri registered as a conscientious objector in Finland for his national service in 2009, before he enrolled on a degree in Computer Science and Physics at Glasgow University.

    Anti-austerity activism

    A wave of anti-austerity protests swept across the UK in 2010-2012, starting with the biggest student protests in a generation in late 2010 and running through to the Occupy movement, which reached the UK roughly a year later.


    2013 arrest and bail

    In October 2013, the United States government unsealed three indictments against Lauri Love, for alleged hacks of governmental agencies.

    #objecteurs_de_conscience #anonymous #répression #hackers #prisonniers_politiques (in this case, fighting an #extradition procedure)

  • Here’s Why Most Neuroscientists Are Wrong About the Brain - Facts So Romantic

    Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons. But experimental results published last year, from a lab at Lund University in Sweden, hint that we need to change our approach. They suggest the brain learns in a way more analogous to that of a computer: It encodes information into molecules inside neurons and reads out that information for use in computational operations. With a computer scientist, Adam King, I co-authored a book, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We argued that well-established results in cognitive science and computer science imply that computation in the brain must resemble computation in a (...)

  • “So with her input, I rewrote the book with a slightly different spin. (I also kept her as a “computer engineer” even though she’s really more of a computer scientist, software developer, etc.) I hope you like this new narrative better, too!”


    #Barbie #feminism #computing