industryterm:web search

  • #serverless #computing — The Truth behind the New Business Trend
    https://hackernoon.com/serverless-computing-the-truth-behind-the-new-business-trend-7426c9fa310

    Serverless Computing — The Truth Behind the New Business Trend. Cloud computing has given enterprises and businesses new hope. Most entrepreneurs (including me) have begun to worry less about IT infrastructure. Well, all thanks to the Cloud! At present, serverless computing is the most-talked-about topic in this context. Google Trends shows a huge spike in web searches on serverless architecture: at least 100 such searches occur daily. So why the sudden interest in serverless computing? Is it just a fad, or indeed a game-changer? Let’s have a close look! What is serverless computing? Newcomers are often startled by the term. Does it mean there are no servers? Yes, and no. In such an architecture, your (...)

    #cloud-services #serverless-computing #serverless-tech

  • Google’s true origin partly lies in CIA and NSA research grants for mass surveillance — Quartz
    https://qz.com/1145669/googles-true-origin-partly-lies-in-cia-and-nsa-research-grants-for-mass-surveill
    https://qzprod.files.wordpress.com/2017/08/rts18wdq-e1502123358903.jpg?quality=80&strip=all&w=1600

    The headline is a bit clickbait, but the information is interesting, if sometimes elliptical.

    Written by: Jeff Nesbit, former director of legislative and public affairs, National Science Foundation.
    Someone who should know what he is talking about.

    In the mid 1990s, the intelligence community in America began to realize that they had an opportunity. The supercomputing community was just beginning to migrate from university settings into the private sector, led by investments from a place that would come to be known as Silicon Valley.

    The intelligence community wanted to shape Silicon Valley’s supercomputing efforts at their inception so they would be useful for both military and homeland security purposes. A digital revolution was underway: one that would transform the world of data gathering and how we make sense of massive amounts of information. Could this supercomputing network, which would become capable of storing terabytes of information, make intelligent sense of the digital trail that human beings leave behind?

    Intelligence-gathering may have been their world, but the Central Intelligence Agency (CIA) and the National Security Agency (NSA) had come to realize that their future was likely to be profoundly shaped outside the government. It was at a time when military and intelligence budgets within the Clinton administration were in jeopardy, and the private sector had vast resources at their disposal. If the intelligence community wanted to conduct mass surveillance for national security purposes, it would require cooperation between the government and the emerging supercomputing companies.

    Silicon Valley was no different. By the mid 1990s, the intelligence community was seeding funding to the most promising supercomputing efforts across academia, guiding the creation of efforts to make massive amounts of information useful for both the private sector as well as the intelligence community.

    They funded these computer scientists through an unclassified, highly compartmentalized program that was managed for the CIA and the NSA by large military and intelligence contractors. It was called the Massive Digital Data Systems (MDDS) project.
    The Massive Digital Data Systems (MDDS) project

    MDDS was introduced to several dozen leading computer scientists at Stanford, CalTech, MIT, Carnegie Mellon, Harvard, and others in a white paper that described what the CIA, NSA, DARPA, and other agencies hoped to achieve. The research would largely be funded and managed by unclassified science agencies like NSF, which would allow the architecture to be scaled up in the private sector if it managed to achieve what the intelligence community hoped for.

    “Not only are activities becoming more complex, but changing demands require that the IC [Intelligence Community] process different types as well as larger volumes of data,” the intelligence community said in its 1993 MDDS white paper. “Consequently, the IC is taking a proactive role in stimulating research in the efficient management of massive databases and ensuring that IC requirements can be incorporated or adapted into commercial products. Because the challenges are not unique to any one agency, the Community Management Staff (CMS) has commissioned a Massive Digital Data Systems [MDDS] Working Group to address the needs and to identify and evaluate possible solutions.”

    In 1995, one of the first and most promising MDDS grants went to a computer-science research team at Stanford University with a decade-long history of working with NSF and DARPA grants. The primary objective of this grant was “query optimization of very complex queries that are described using the ‘query flocks’ approach.” A second grant—the DARPA-NSF grant most closely associated with Google’s origin—was part of a coordinated effort to build a massive digital library using the internet as its backbone. Both grants funded research by two graduate students who were making rapid advances in web-page ranking, as well as tracking (and making sense of) user queries: future Google cofounders Sergey Brin and Larry Page.

    The research by Brin and Page under these grants became the heart of Google: people using search functions to find precisely what they wanted inside a very large data set. The intelligence community, however, saw a slightly different benefit in their research: Could the network be organized so efficiently that individual users could be uniquely identified and tracked?

    The grants allowed Brin and Page to do their work and contributed to their breakthroughs in web-page ranking and tracking user queries. Brin didn’t work for the intelligence community—or for anyone else. Google had not yet been incorporated. He was just a Stanford researcher taking advantage of the grant provided by the NSA and CIA through the unclassified MDDS program.
    Left out of Google’s story

    The MDDS research effort has never been part of Google’s origin story, even though the principal investigator for the MDDS grant specifically named Google as directly resulting from their research: “Its core technology, which allows it to find pages far more accurately than other search engines, was partially supported by this grant,” he wrote. In a published research paper that includes some of Brin’s pivotal work, the authors also reference the NSF grant that was created by the MDDS program.

    Instead, every Google creation story mentions only one federal grant: the NSF/DARPA “digital libraries” grant, which was designed to allow Stanford researchers to search the entire World Wide Web stored on the university’s servers at the time. “The development of the Google algorithms was carried on a variety of computers, mainly provided by the NSF-DARPA-NASA-funded Digital Library project at Stanford,” Stanford’s Infolab says of its origin, for example. NSF likewise references only the digital libraries grant, not the MDDS grant, in its own history of Google’s origin. In the famous research paper, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” which describes the creation of Google, Brin and Page thanked the NSF and DARPA for their digital libraries grant to Stanford. But the grant from the intelligence community’s MDDS program—specifically designed for the breakthrough that Google was built upon—has faded into obscurity.

    Google has said in the past that it was not funded or created by the CIA. For instance, when stories circulated in 2006 that Google had received funding from the intelligence community for years to assist in counter-terrorism efforts, the company told Wired magazine founder John Battelle, “The statements related to Google are completely untrue.”

    Did the CIA directly fund the work of Brin and Page, and therefore create Google? No. But were Brin and Page researching precisely what the NSA, the CIA, and the intelligence community hoped for, assisted by their grants? Absolutely.

    In this way, the collaboration between the intelligence community and big, commercial science and tech companies has been wildly successful. When national security agencies need to identify and track people and groups, they know where to turn – and do so frequently. That was the goal in the beginning. It has succeeded perhaps more than anyone could have imagined at the time.

  • Corporate Surveillance in Everyday Life
    http://crackedlabs.org/en/corporate-surveillance

    Report : How thousands of companies monitor, analyze, and influence the lives of billions. Who are the main players in today’s digital tracking ? What can they infer from our purchases, phone calls, web searches, and Facebook likes ? How do online platforms, tech companies, and data brokers collect, trade, and make use of personal data ?

    #données #sécuritaire #surveillance #BigData #web #profiling

  • Google’s parent Alphabet results hit by rising traffic costs, strong dollar | Reuters
    http://in.reuters.com/article/us-alphabet-results-idINKCN0XI2MZ

    Google’s parent Alphabet Inc (GOOGL.O) missed Wall Street targets for first-quarter profit and revenue on Thursday as it spent more money to build traffic for its mobile advertising services.

    The results, which were also hit by the strong dollar, drove shares of the Web search company down 6 percent in late trading Thursday.

    Alphabet’s consolidated revenue rose to $20.26 billion from $17.26 billion, slightly below the $20.37 billion analyst consensus, according to Thomson Reuters I/B/E/S. Non-GAAP earnings per share of $7.50, excluding one-time items, missed analysts’ expectations of $7.97.

    Chief Financial Officer Ruth Porat said on a conference call with investors that payments to other web sites, known as #traffic_acquisition_costs (#TAC), totaled $3.8 billion and accounted for 21 percent of advertising revenues. The percentage of ad revenues spent on TAC grew 13 percent year-over-year.
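    As a quick sanity check on the figures above, the reported TAC and TAC share imply the quarter’s advertising revenue. This is a rough back-of-the-envelope illustration using the rounded numbers from the article, not Alphabet’s exact reported figure:

```python
# Back-of-the-envelope check of the TAC figures reported for the quarter.
# Both inputs are the rounded numbers quoted in the article.
tac = 3.8e9        # traffic acquisition costs paid to other web sites, in dollars
tac_share = 0.21   # TAC as a share of advertising revenue

# Implied advertising revenue for the quarter
ad_revenue = tac / tac_share
print(f"implied ad revenue: ${ad_revenue / 1e9:.1f}B")  # roughly $18.1B
```

    The implied ad-revenue figure (about $18.1 billion) is consistent with the $20.26 billion consolidated revenue reported above, since advertising makes up most but not all of Alphabet’s revenue.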

    That reflects the ongoing shift to mobile advertising and the growing importance of programmatic advertising, in which ads are bought, sold and displayed by automated systems.

    Investors should get used to seeing increased TAC as “the cost of doing business,” said Sameet Sinha, B. Riley & Co. analyst.

  • “TrackMeNot is a lightweight browser extension that helps protect web searchers from surveillance and data-profiling by search engines. It does so not by means of concealment or encryption (i.e. covering one’s tracks), but instead, paradoxically, by the opposite strategy: noise and obfuscation. [...] background process that periodically issues randomized search-queries to popular search engines, e.g., AOL, Yahoo!, Google, and Bing. It hides users’ actual search trails in a cloud of ’ghost’ queries, significantly increasing the difficulty of aggregating such data into accurate or identifying user profiles.”

    http://www.cs.nyu.edu/trackmenot

    Note that many network protocols have provisions to add random noise to defeat some types of privacy attacks but they are not always activated.
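    The obfuscation idea described above can be sketched in a few lines: periodically pick a search engine and a random decoy phrase, and issue the resulting “ghost” query. This is a minimal illustration of the strategy, not TrackMeNot’s actual code; the engine URLs and decoy vocabulary below are placeholders (the real extension draws its query terms from a much larger, dynamically refreshed list):

```python
import random
import time
import urllib.parse

# Placeholder engine endpoints and decoy vocabulary, for illustration only.
ENGINES = [
    "https://www.google.com/search?q=",
    "https://search.yahoo.com/search?p=",
    "https://www.bing.com/search?q=",
]
DECOY_TERMS = ["weather", "recipes", "jazz", "marathon", "gardening", "telescope"]

def ghost_query(rng=random):
    """Build one randomized 'ghost' query URL from a random engine and phrase."""
    engine = rng.choice(ENGINES)
    phrase = " ".join(rng.sample(DECOY_TERMS, k=rng.randint(1, 3)))
    return engine + urllib.parse.quote_plus(phrase)

def run(interval_range=(30, 300)):
    """Issue ghost queries at random intervals (printed here instead of sent)."""
    while True:
        print(ghost_query())
        time.sleep(random.uniform(*interval_range))
```

    The randomized intervals matter as much as the randomized terms: queries fired at a fixed cadence would be trivial for a search engine to filter out of a user’s profile.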

    #privacy #google #PRISM #search_engines

  • The National Security Agency’s monitoring of Americans includes customer records from the three major phone networks as well as emails and Web searches, and the agency also has cataloged credit-card transactions, said people familiar with the agency’s activities.

    http://online.wsj.com/article/SB10001424127887324299104578529112289298922.html

    Bearing in mind, moreover, that various foreign services, including those of the United Kingdom, have access to the same capabilities.

  • Unreported Side Effects of Drugs Are Found Using Internet Search Data, Study Finds - NYTimes.com
    http://www.nytimes.com/2013/03/07/science/unreported-side-effects-of-drugs-found-using-internet-data-study-finds.html

    Using automated software tools to examine queries by six million Internet users taken from Web search logs in 2010, the researchers looked for searches relating to an antidepressant, paroxetine, and a cholesterol-lowering drug, pravastatin. They found evidence that the combination of the two drugs caused high blood sugar.

    The study, which was reported in the Journal of the American Medical Informatics Association on Wednesday, is based on data-mining techniques similar to those employed by services like Google Flu Trends, which has been used to give the public early warning of flu prevalence.

    The original article (abstract only): Web-scale pharmacovigilance: listening to signals from the crowd http://jamia.bmj.com/content/early/2013/02/05/amiajnl-2012-001482.abstract

    He turned to computer scientists at Microsoft, who created software for scanning anonymized data collected from a software toolbar installed in Web browsers by users who permitted their search histories to be collected. The scientists were able to explore 82 million individual searches for drug, symptom and condition information.

    The researchers first identified individual searches for the terms paroxetine and pravastatin, as well as searches for both terms, in 2010. They then computed the likelihood that users in each group would also search for hyperglycemia as well as roughly 80 of its symptoms — words or phrases like “high blood sugar” or “blurry vision.”

    They determined that people who searched for both drugs during the 12-month period were significantly more likely to search for terms related to hyperglycemia than were those who searched for just one of the drugs. (About 10 percent, compared with 5 percent and 4 percent for just one drug.)
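    The comparison described above can be sketched in a few lines: classify each user by which of the two drugs appears in their search history, then compare how often each group also searched for hyperglycemia-related terms. The toy logs and term list below are invented for illustration, not the study’s data:

```python
# Toy search logs: one set of query terms per anonymized user.
# Invented for illustration; the actual study mined 82 million real searches.
LOGS = [
    {"paroxetine", "pravastatin", "high blood sugar"},
    {"paroxetine", "pravastatin", "blurry vision"},
    {"paroxetine", "pravastatin", "weather"},
    {"paroxetine", "headache"},
    {"pravastatin", "high blood sugar"},
    {"pravastatin", "recipes"},
]

# A small sample standing in for the ~80 hyperglycemia-related terms screened.
SYMPTOM_TERMS = {"hyperglycemia", "high blood sugar", "blurry vision"}

def drug_group(history):
    """Classify a user by which of the two drugs they searched for."""
    drugs = {"paroxetine", "pravastatin"} & history
    return {2: "both", 1: "one", 0: "neither"}[len(drugs)]

def symptom_rate(group):
    """Fraction of users in a group who also searched a symptom-related term."""
    users = [h for h in LOGS if drug_group(h) == group]
    hits = [h for h in users if h & SYMPTOM_TERMS]
    return len(hits) / len(users)
```

    On the toy data, `symptom_rate("both")` exceeds `symptom_rate("one")`, mirroring the 10 percent versus 4–5 percent gap the study reported; the real work also had to control for confounders and the roughly 80 symptom phrasings.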

    (…)

    The researchers said they were surprised by the strength of the “signal” that they detected in the searches and argued that it would be a valuable tool for the F.D.A. to add to its current system for tracking adverse effects.

    (…)

    “I think there are tons of drug-drug interactions — that’s the bad news,” Dr. Altman said. “The good news is we also have ways to evaluate the public health impact.”

  • A Victory for Google as F.T.C. Takes No Formal Steps - NYTimes.com

    http://www.nytimes.com/2013/01/04/technology/google-agrees-to-changes-in-search-ending-us-antitrust-inquiry.html?nl=toda

    By EDWARD WYATT, published January 3, 2013

    WASHINGTON — The Federal Trade Commission on Thursday handed Google a major victory by declaring, after an investigation of nearly two years, that the company had not violated antitrust or anticompetition statutes in the way it arranges its Web search results.

    By allowing Google to continue to present search results that highlight its own services, the F.T.C. decision could enable Google to further strengthen its already dominant position on the Internet.

    #google #internet #réseaux-sociaux

  • Google records show book scanning was aimed at Amazon — paidContent
    http://paidcontent.org/2012/08/06/google-records-show-book-scanning-was-aimed-at-amazon

    The filing points to internal Google documents in an attempt to show that the scanning was an overtly commercial project, and that the scanning was not a fair use as Google is claiming.

    In a 2003 internal Google presentation described in the filing, the company stated: “[W]e want web searchers interested in book content to come to Google not Amazon.”

  • Could the Net be killing the planet one web search at a time?
    http://www.vancouversun.com/business/Could%20killing%20planet%20search%20time/4891461/story.html

    Despite the web’s green promise, this explosion of data has turned the Internet into one of the planet’s fastest-growing sources of carbon emissions. The Internet now consumes two to three per cent of the world’s electricity.

    If the Internet was a country, it would be the planet’s fifth-biggest consumer of power, ahead of India and Germany. The Internet’s power needs now rival those of the aviation industry and are expected to nearly double by 2020.

    "The #Internet pollutes, but people don’t understand why it pollutes."

    #énergie #pollution

  • ’Chinese hackers’ break into Gmail accounts - Americas - Al Jazeera English
    http://english.aljazeera.net//news/americas/2011/06/20116205619120217.html

    Hackers likely based in China have attempted to break into hundreds of Google mail accounts, including those of senior US government officials, Chinese activists and journalists, the Internet company said.

    The unknown perpetrators, who appeared to originate from Jinan in Shandong province, recently tried to breach and monitor email accounts by stealing passwords, but Google detected and “disrupted” their campaign, the world’s largest Web search company said on its official blog on Thursday.