The Artificial Intelligence Revolution: Part 1 – Wait but Why
Musings on HTTP/2 and Bundling
“If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it’s not going to be a big deal.”
The week commencing 12 April 2015 saw what is believed to be the largest loss of life at sea in the recent history of the Mediterranean. On 12 April, 400 people died when an overcrowded boat capsized due to its passengers’ excitement at the sight of platform supply vessels approaching to rescue them. Less than a week later, on 18 April, a similar incident took an even greater toll in human lives, leading to the deadliest single shipwreck recorded by the United Nations High Commissioner for Refugees (UNHCR) in the Mediterranean. Over 800 people are believed to have died when a migrants’ vessel sank after a mis-manoeuvre led it to collide with a cargo ship that had approached to rescue its passengers. More than 1,200 lives were thus lost in a single week. As Médecins Sans Frontières (MSF) commented at the time, these figures eerily resemble those of a war zone.
A study shows that with the end of Mare Nostrum, more migrants are dying
Exactly one year after the shipwreck of 18 April 2015, which cost the lives of around 800 migrants, an in-depth study reconstructs the facts and casts a sinister light on the European institutions, in particular on Frontex, the agency for the control of the external borders.
That reassures me, then... but why is there no little filled triangle before the URL, which usually signals whether the link has already been posted on seenthis?
Besides, this article has already been posted by me as a comment on the post, but there too, no filled triangle.
@seenthis: any idea?
EU-UK naval mission on people-smuggling led to more deaths, report says
The peers say an unintended consequence of Operation Sophia’s policy of destroying smugglers’ boats has been that they have adapted and sent refugees and migrants to sea in unseaworthy vessels, leading to more deaths.
Here is the #rapport:
Operation Sophia saves lives but has not stopped people smuggling
The EU External Affairs Sub-Committee today publishes a report on the EU’s naval mission in the Mediterranean, Operation Sophia. This report concludes that it has failed in its mission to disrupt the business of people smuggling in the central Mediterranean.
Operation Sophia: a failed mission
EU-UK naval mission on people-smuggling led to more deaths, report says
House of Lords inquiry finds operation failed in objectives and has had little impact on the flow of irregular migrants
Let’s build together — NZZ’s storytelling toolbox Q is now open source
Here are some key features:
– Simple user interface and workflows, designed for people with no specific expertise in data visualisation, to facilitate broad usage in newsrooms.
– The same workflow from creation to publication for all tools, making it easy for users to understand new tools.
– Searchable archive of all items created with any of the tools, so that everything can be easily re-used, edited and used as blueprints.
– The same item can be rendered in different ways and designs for different targets (server side rendered, client side rendered, raster image, svg, you name it)
– The data for the graphics is stored in a database and rendered on runtime (with heavy caching), making sure you always have the latest version everywhere it’s embedded, with no additional effort.
– Q server, Q editor and tool services are decoupled and communicate via HTTP. This allows you to use whatever technologies fit your needs best.
Tools define the editor form using JSON schema with some extensions, making it very easy to set up new tools.
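As an illustration of that last point, a tool’s editor form might be declared with a schema along these lines (the field names below are invented for the example, not taken from Q’s actual tools):

```json
{
  "title": "Election results: Seats won by parties",
  "type": "object",
  "properties": {
    "title": { "type": "string", "title": "Chart title" },
    "parties": {
      "type": "array",
      "title": "Parties",
      "items": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "seats": { "type": "number" }
        }
      }
    }
  },
  "required": ["title"]
}
```

The Q editor can generate its input form directly from such a declaration, which is what makes new tools cheap to set up.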
The tools currently available are:
– Election results: Votes for people
– Election results: Votes for parties
– Election results: Seats won by parties
Passive Fingerprinting of HTTP/2 Clients
"This paper demonstrates how these new implementations create small nuances, which differentiate HTTP/2 clients from one another. In addition, we have shown how these unique implementation features can be leveraged to passively fingerprint web clients. Our research shows that passive HTTP/2 client fingerprinting can be used to deduce the true details about the client’s implementation — for example, browser type, version, and sometimes even the operating system. This technique can be used to better detect clients that spoof or don’t report their User-Agent string, and at the same time increase confidence in User-Agent strings reported by legitimate (...)
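As a sketch of the idea: each client sends an HTTP/2 SETTINGS frame whose parameters (and their order) differ between implementations, so the server can match them against known profiles. The values below are invented placeholders, not real browser fingerprints.

```python
# Passive HTTP/2 fingerprinting sketch: match the ordered (setting_id, value)
# pairs of a client's SETTINGS frame against a table of known profiles.
# The fingerprint data here is illustrative only.

KNOWN_FINGERPRINTS = {
    ((1, 65536), (3, 1000), (4, 6291456)): "browser-A (hypothetical)",
    ((3, 100), (4, 65535)): "browser-B (hypothetical)",
}

def fingerprint(settings_frame):
    """Return the client label matching this ordered list of SETTINGS pairs."""
    return KNOWN_FINGERPRINTS.get(tuple(settings_frame), "unknown client")

print(fingerprint([(1, 65536), (3, 1000), (4, 6291456)]))  # browser-A (hypothetical)
```

In practice the paper combines several such signals (SETTINGS, WINDOW_UPDATE increments, stream priorities), but the matching principle is the same.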
Necurs += DDoS
World’s largest spam botnet (5 million bots) adds proxy module with DDoS features, but will it really be used that way?
Necurs is malware mainly known for sending large spam campaigns, most notably the Locky ransomware. However, Necurs is not only a spambot: it is a modular piece of malware composed of a main bot module and a userland rootkit, and it can dynamically load additional modules.
At first look, it seemed to be a simple SOCKS/HTTP proxy module, but as we looked at the commands the bot would accept from the C2 [port 5222] we realised that there was an additional command that would cause the bot to start making HTTP or UDP requests to an arbitrary target in an endless loop, in a way that could only be explained as a DDoS attack.
Please note that we have not seen Necurs being used for DDoS attacks; we simply saw that it has that capability in one of the modules that it has been loading.
The rest of their post contains the results of a technical analysis of this module, detailing its C2 protocol, the SOCKS/HTTP proxy features, and the DDoS attack features.
The sheer size of the Necurs botnet, even in its worst days, dwarfs all of today’s IoT botnets. The largest IoT botnet ever observed was Mirai Botnet #14 that managed to rack up around 400,000 bots towards the end of 2016.
“The proxy/DDoS module is quite old,” said MalwareTech, a security researcher that has tracked Necurs’ evolution for years. “I imagine it was put in as a potential revenue stream but then they found there was more money in spam.”
Beyond the higher revenue stream the Necurs gang stands to earn from spam, we must also take into consideration other reasons why it’s highly unlikely that we’re going to see DDoS attacks from Necurs.
Necurs’ authors have invested time and money into developing a professional, well-oiled cyber-crime machine. There is no reason to risk their steady revenue stream just for the sake of running a DDoS-for-hire service, from which they stand only to lose.
Mathematically, it makes no sense to destroy three revenue streams (Dridex, Locky, and rentable spamming service) just for the sake of creating and supporting a DDoS booter service.
“Every unencrypted HTTP request reveals information about a user’s behavior, and the interception and tracking of unencrypted browsing has become commonplace. Today, there is no such thing as non-sensitive web traffic, and public services should not depend on the benevolence of network operators.”
Cognitect Vase – Unlocking hidden value in your data ▻http://blog.cognitect.com/blog/2017/1/30/unlocking-hidden-value-in-your-data-1
« Vase is a library for writing declarative, data-driven microservices. A single HTTP service, complete with database integration and data validation, can be created within minutes.
We achieve this acceleration through Vase’s declarative nature: Vase does all of the mundane data-plumbing of a service, so you can focus on delivering value to your customers. The microservices we build with Vase easily evolve and grow to meet new business demands. Individual teams can each evolve their Vase services independently, ensuring that no team is blocked from delivering value. […] »
Everything you need to know about HTTP security headers
A recap of the X-XSS-Protection, Content Security Policy, HTTP Strict Transport Security (HSTS), HTTP Public Key Pinning (HPKP), X-Frame-Options, X-Content-Type-Options, Referrer-Policy and cookie options headers.
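The headers covered by the recap can be turned into a simple checklist. A sketch in Python (the values are common defaults for illustration, not one-size-fits-all recommendations):

```python
# Recommended security headers and a helper that reports which ones a
# response is missing. Works on any mapping of header name -> value.

SECURITY_HEADERS = {
    "X-XSS-Protection": "1; mode=block",
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "no-referrer-when-downgrade",
}

def missing_headers(response_headers):
    """Return the recommended headers absent from a response."""
    return [h for h in SECURITY_HEADERS if h not in response_headers]

print(missing_headers({"X-Frame-Options": "DENY"}))
```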
Running Express, Koa and Hapi on HTTP/2
Now that we know what HTTP/2 is and what its advantages are, we can start upgrading our Node apps. As you have probably seen, there is an npm module for almost anything. Developers have created two awesome modules for working with the HTTP/2 protocol, http2 and spdy. They use the same API design as Node’s HTTPS API, so it is really simple to get started.
#HTTP is obsolete. It’s time for the distributed, permanent web
Part 2: How IPFS solves these problems
We’ve discussed HTTP’s problems (and the problems of #hypercentralization). Now let’s talk about IPFS, and how it can help improve the web.
IPFS fundamentally changes the way we look for things, and this is its key feature. With HTTP, you search for locations. With #IPFS, you search for content.
Let me show you an example. This is a file on a server I run:
Instead of looking for a centrally-controlled location and asking it what it thinks /img/neocitieslogo.svg is, what if we instead asked a distributed network of millions of computers not for the name of a file, but for the content that is supposed to be in the file?
This is precisely what IPFS does.
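The idea can be sketched in a few lines. This is a simplification: real IPFS addresses are multihash-encoded content identifiers (CIDs), not bare hex sha256 digests, but the principle is the same.

```python
import hashlib

# Content addressing sketch: the "address" of data is derived from a hash
# of the data itself, not from a server location. Any node holding bytes
# with this hash can serve the request, and the receiver can verify the
# content simply by re-hashing it.

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

logo = b"<svg>...</svg>"
addr = content_address(logo)
# Verification on receipt: the content matches its address or it is rejected.
assert content_address(logo) == addr
```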
#à_lire (but in the meantime, if you have read it ↓↓↓) ;-)
Gamification of DDoS attacks
It was only a matter of time:
A Turkish hacking crew is luring participants to join its DDoS platform to compete with peers to earn redeemable points that are exchangeable for hacking tools and click-fraud software. The goal, security researchers say, is to “gamify” DDoS attacks in order to attract a critical mass of hackers working toward a unified goal.
Hackers are recruited via Turkish Dark Web hacker forums, and to participate in the program they must download the Surface Defense collaboration program and register. Surface Defense runs locally on a computer. Users congregate online within the program and can communicate and compare points they earn.
Next, participants are required to download a DDoS attack tool called Sledgehammer. Sledgehammer is software [with an unadvertised backdoor] that comes preconfigured to perform HTTP-based Slowloris-type DDoS assaults against 24 preselected sites determined by the software’s author.
Users receive a point for every 10 minutes they attack one of the websites. With those points they can obtain hacking software.
The list of websites can be found here
Two separate things happened on August 9, 1995, both by chance emerging from Northern California though they had little else in common. The first was a scheduled event: the initial public offering (IPO) by Netscape, a startup tech firm designed to make software to power the Internet.
I remember walking through the hallway at work that morning, probably heading for a coffee refill, when I saw a clump of co-workers and magazine editors talking anxiously. I thought they were talking about the Netscape IPO, but they weren’t. “Jerry Garcia died,” one of the editors said to me. “We need to replace the front page and get a new headline up, stat.”
Season 3 of the series Halt and Catch Fire depicts a time of networks, services and client/server computing, but no Internet. That’s the late ’80s.
But years later, it ends with three new things building the prospective Internet: new protocols (HTTP and HTML), new ideas (the Navigator of CERN, in Europe) and new paradigms (new things with no goal, no purpose... that’s the liberty).
As Joe MacMillan says, it’s a door to a new, shining world to be built.
All I did was take the principle of hypertext and connect it to the principles of TCP and DNS and then, boom, it was the World Wide Web!
The Internet was invented on THIS machine:
I strongly object to the idea that the WWW constitutes the essential part of the Internet. HTTP is only one protocol among others and should be treated accordingly. What is interesting in this article is the amalgam of American-style libertarian ideas with so-called technical progress. It is an ideological artefact that opens a window onto the historical development.
Blockchain can help prevent DDoS attacks
Several products such as #Blockstack, #Nebulis and #Maidsafe are facilitating the decentralisation of the Domain Name System (DNS).
This would make it much more difficult to launch attacks such as the one suffered by Dyn DNS.
“By using the Bitcoin blockchain to bind the name to a public key and DNS information, Blockstack allows anyone to register a name while simultaneously ensuring that only the name’s owner can control it."
“If the Dyn attackers wanted to knock websites offline in Blockstack, they would have to attack either the individual sites or attack the Bitcoin network itself. Even then, all the Dyn attackers could do is slow down name updates,”
Another project similar to the Blockstack vision is a platform called Nebulis, which uses Ethereum under the hood. [...] The difference is, this platform uses IPFS as a replacement for HTTP and utilizes the Ethereum blockchain for DNS capabilities.
Maidsafe focuses on removing centralised servers and creates an encrypted distributed framework across a peer-to-peer network.
GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser. It provides fast and valuable #http statistics for system administrators that require a visual server report on the fly.
The “.well-known” directory on webservers (aka: RFC 5785)
I first came across the concept of the directory named ’.well-known’ when automating Let’s Encrypt, the free SSL certificate authority. It didn’t strike me as abnormal to have a validation happen via an HTTP or HTTPS GET request. Those Let’s Encrypt validation URLs usually point to site.tld/.well-known/acme-challenge/random-key.txt.
At first I thought this was just the random URL used by Let’s Encrypt, but today I learned there’s more to it. Allow me to introduce RFC-5785.
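The structure of such URLs is easy to see with a small helper (the helper itself is hypothetical; the path layout is the RFC 5785 convention the post describes: a fixed "/.well-known/" prefix, a registered suffix such as Let’s Encrypt’s "acme-challenge", then whatever the application puts underneath):

```python
# Compose an RFC 5785 well-known URI from its three parts.

def well_known_url(host: str, suffix: str, path: str = "") -> str:
    url = f"https://{host}/.well-known/{suffix}"
    if path:
        url += f"/{path}"
    return url

print(well_known_url("site.tld", "acme-challenge", "random-key.txt"))
# https://site.tld/.well-known/acme-challenge/random-key.txt
```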
More on Mirai, and more than Mirai
Akamai says Mirai was not alone:
While Akamai confirmed that the Mirai botnet was part of the attack, the company also said that Mirai was only “a major participant in the attack” and that at least one other botnet might have been involved, though they couldn’t confirm that the attacks were coordinated.
Akamai refers to Mirai as Kaiten and has it documented here:
More on the released source code of Mirai which confirms the use of GRE flooding, one of the techniques used on top of DNS Water Torture:
A copy of the source code files provided to SecurityWeek includes a “read” where the author of Mirai explains his reasons for leaking the code and provides detailed instructions on how to set up a botnet.
Mirai, believed to have made rounds since May 2016, infects IoT devices protected by weak or default credentials. Once it hijacks a device, the threat abuses it to launch various types of DDoS attacks, including less common UDP floods via Generic Routing Encapsulation (GRE) traffic.
This was proven through reverse-engineering by
GRE is still an uncommon attack vector, but it was already used during the 2016 Rio Games.
Which cameras, IoT and DVR devices are taking part in Mirai?
But one researcher, Flashpoint’s Zachary Wikholm, today claimed to have found a single Chinese firm, Hangzhou XiongMai Technologies (XM), that shipped flawed code allowing the perpetrators to potentially amass nearly half a million bots for their malicious network.
Interesting article by F5 which goes in a bit more detail about the two types of GRE flood attacks (Ethernet and IP based)
They also make a reference to the origin of the Mirai name:
It seems that the bot creator named his creation after a Japanese series “Mirai Nikki (The Future Diary)” and uses the nickname of “Anna-senpai” referring to the “Shimoneta” series.
Here are the 61 passwords that powered the Mirai IoT botnet
Some more information on its spread, operations, and code, by Incapsula.
One of the most interesting things revealed by the code was a hardcoded list of IPs Mirai bots are programmed to avoid when performing their IP scans.
This list is interesting, as it offers a glimpse into the psyche of the code’s authors. On the one hand, it exposes concerns of drawing attention to their activities. A concern we find ironic, considering that this malware was eventually used in one of the most high-profile attacks to date.
HTTP GET floods were already pernicious. For years, attackers have been able to disable web sites by sending a flood of HTTP requests for large objects or slow database queries. Typically, these requests flow right through a standard firewall because hey, they look just like normal HTTP requests to most devices with hardware packet processing. The Mirai attack code takes it a step further by fingerprinting cloud-based DDoS scrubbers and then working around some of their HTTP DDoS mitigation techniques (such as redirection).
Mirai botnet leverages #STOMP Protocol to power DDoS attacks.
STOMP is a simple application-layer, text-based protocol [an alternative to other open messaging protocols, such as AMQP (Advanced Message Queuing Protocol)] that allows clients to communicate with message brokers. It provides a communication method for applications developed in different programming languages.
Below are the steps of the STOMP DDoS attack:
• A botnet device uses STOMP to open an authenticated TCP handshake with a targeted application.
• Once authenticated, junk data disguised as a STOMP TCP request is sent to the target.
• The flood of fake STOMP requests leads to network saturation.
• If the target is programmed to parse STOMP requests, the attack may also exhaust server resources. Even if the system drops the junk packets, resources are still used to determine if the message is corrupted.
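The traffic described above rides on ordinary STOMP frames, which are just a command line, header lines, a blank line, and a NUL-terminated body. A minimal sketch of how such frames are assembled (the header values are placeholders):

```python
# Build a STOMP frame per the STOMP 1.2 wire format:
# COMMAND\nheader:value\n...\n\nbody\x00

def stomp_frame(command: str, headers: dict, body: str = "") -> bytes:
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body + "\x00").encode()

# A legitimate client handshake frame...
connect = stomp_frame("CONNECT", {"accept-version": "1.2", "host": "broker.example"})
# ...and a SEND frame; in the attack, the body is junk data and the volume
# of such frames is what saturates the network.
send = stomp_frame("SEND", {"destination": "/queue/a"}, "junk " * 100)
```

This is why the attack is costly even for targets that discard the junk: the server must still parse each frame far enough to decide it is garbage.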
How Mirai Uses STOMP Protocol to Launch DDoS Attacks
Mirai botnet with 400,000 devices now for rent
A DDoS-for-hire service, run by two hackers going by the pseudonyms Popopret and BestBuy, is now reportedly advertising a Mirai botnet up for rent. The Mirai botnet allegedly comprises over 400,000 infected bots and may have been sired from the original Mirai source code.
Renting the botnet does not come cheap. Customers desiring to rent the botnet must do so for a minimum of two weeks. However, clients can determine the number of bots, the attack duration and the DDoS cool-down (a term which refers to the length of time between consecutive attacks).
Popopret and BestBuy’s Mirai botnet is a more evolved version of the original botnet. The two hackers have added new features, such as brute-force attacks via SSH and support for exploiting zero-day vulnerabilities. According to two security researchers going by the handles 2sec4u and MalwareTech on Twitter, some of the newly created Mirai botnets can now carry out DDoS attacks by spoofing IP addresses and may also be capable of bypassing DDoS mitigation systems.
Understanding the Mirai Botnet
In this paper, we provide a seven-month retrospective analysis of Mirai’s growth to a peak of 600k infections and a history of its DDoS victims. By combining a variety of measurement perspectives, we analyse how the botnet emerged, what classes of devices were affected, and how Mirai variants evolved and competed for vulnerable hosts. Our measurements serve as a lens into the fragile ecosystem of IoT devices. We argue that Mirai may represent a sea change in the evolutionary development of botnets: the simplicity through which devices were infected and its precipitous growth demonstrate that novice malicious techniques can compromise enough low-end devices to threaten even some of the best-defended targets. To address this risk, we recommend technical and nontechnical interventions, as well as propose future research directions.
How to get started with #Varnish Cache 5.0 with experimental #HTTP/2 support
Varnish Cache 5.0 is now available, and it ships with experimental support for HTTP/2. By “experimental” we mean that it works, but we haven’t had any big production sites on it yet. We are eager for you to use it, test it, get your hands dirty with it and give us your input.
Here is how you enable it:
1) Install Varnish Cache 5.0.0.
2) Install #Hitch TLS proxy (www.hitch-tls.org) with ALPN support for terminating client TLS.
3) Configure ALPN, PROXYv2 and finally HTTP/2!
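A sketch of what step 3 can look like in practice (flag and option names as documented for Varnish 5 and Hitch; the addresses and file paths are examples):

```shell
# HTTP/2 is a feature flag in varnishd; Varnish listens for PROXY-protocol
# connections from the TLS terminator on a local port:
varnishd -a :8443,PROXY -p feature=+http2 -f /etc/varnish/default.vcl

# Hitch terminates client TLS, advertises h2 via ALPN, and forwards to
# Varnish with PROXYv2 (example /etc/hitch/hitch.conf):
#   frontend = "[*]:443"
#   backend  = "[127.0.0.1]:8443"
#   alpn-protos = "h2, http/1.1"
#   write-proxy-v2 = on
```

With this in place, browsers that negotiate h2 over ALPN get HTTP/2 while everything else falls back to HTTP/1.1.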
In the recent visionscarto archives (2012), there is this evocation of the difficult, if not impossible, cartographic representation of the borders in the Kashmir sector. It is still topical: nothing is ever really simple in cartography.
Kashmir, a cartographic headache
India is a great democracy, where freedom of the press is guaranteed by article 19 (1)(a) of the Constitution. But when the English magazine The Economist published, in May 2011, a long analytical article on Indo-Pakistani relations and rivalries and the Kashmir conflict, the great democracy nonetheless bared its claws to explain that, even in a democracy, the press cannot do “just anything”. Not because of the article itself, but because of its cartographic accompaniment, of a very classic style, retracing the geography of this conflict frozen for decades.
After Kashmir attack, US media threaten to support India in war with Pakistan - World Socialist Web Site
With India pledging to “punish” Pakistan for the attack Islamist militants mounted on an Indian army base at Uri in the disputed Kashmir region, a concerted campaign has begun in the US media indicating support for aggressive Indian action against Pakistan. Given that India and Pakistan are nuclear-armed states that have fought four bloody wars against each other, this campaign is extraordinarily reckless.
On Wednesday, the Wall Street Journal carried a column on Indian Prime Minister Narendra Modi’s policy titled “Modi’s Restraint Toward Pakistan.” It wrote, “Modi is practicing restraint for now, but Islamabad can’t rely on that continuing. Modi’s offer of cooperation, if rejected, will become part of a case for making Pakistan even more of a pariah nation than it already is.”
It is also the subject of “Le dessous des cartes”, on #google_maps and the India / Pakistan / China borders.
“The purpose of this page is to explain what’s wrong with HTTP content negotiation and why you should not suggest HTTP content negotiation as a solution to a problem.”
Also, it’s a problem that negotiation by natural language is about negotiating by a characteristic of the human user as opposed to a characteristic of software. That is, the browser doesn’t really know the characteristics of the human user without the human user configuring the browser. Since negotiation by natural language is so rarely useful, it doesn’t really make sense for the browser to advertise the configuration option a lot or insist that the user makes the configuration before browsing the Web. As a result, the browser doesn’t really know about the user’s language skills beyond guessing that the user might be able to read the UI language of the browser. And that’s a pretty bad guess. It doesn’t give any information about the other languages the user is able to read and the user might not even be able to actually read meaningful prose in the language of the browser UI. (You can get pretty far with the browser UI simply by knowing that you can type addresses into the location bar and that the arrow to the left takes you back to the previous page.)
This part seems wrong to me: in fact, in many countries, when you set up a computer, it gets installed in your main language, and so the browser does too. My browser is configured in French by default; if I lived elsewhere it would be configured in English or Chinese.
So that main-language information is there by default WITHOUT users having had to configure anything extra, the vast majority of the time.
So yes: we do have that information, and it can be used to display the site in the visitor’s main language IF the site offers that language among its available languages (if my interface exists in French, English, Spanish and Chinese, and my visitor’s browser is in Chinese, then by default I already serve them that language without them having done anything: AND they can always change it afterwards, if they prefer).
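The server-side logic described in this comment can be sketched as follows (a simplified parser for the Accept-Language header; real implementations also handle region subtags and wildcards):

```python
# Pick the visitor's preferred language if the site has it, else a default.

def pick_language(accept_language: str, available, default="en"):
    """accept_language is the raw header value, e.g. 'zh-CN,zh;q=0.9,en;q=0.8'."""
    prefs = []
    for part in accept_language.split(","):
        lang, _, q = part.strip().partition(";q=")
        prefs.append((float(q) if q else 1.0, lang.split("-")[0].lower()))
    for _, lang in sorted(prefs, reverse=True):
        if lang in available:
            return lang
    return default

print(pick_language("zh-CN,zh;q=0.9,en;q=0.8", {"fr", "en", "es", "zh"}))  # zh
```

The visitor who prefers something else can still switch languages in the site’s UI; negotiation only sets the starting point.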
@rastapopoulos in my case, I used to rush to change my browser’s language configuration to put English first, because I too often landed on sites where the French content was less rich, or less up to date.
But for a few years now I haven’t bothered with it, since sites hardly use content negotiation any more anyway.
Secure your website with HTTP headers
CORS headers (Access-Control-Allow-Origin)
An illuminating presentation of the CSP header by Nicolas Hofmann:
Setting up #monitoring with Report-URI + Report-Only
Ironically, CSP is too efficient in some browsers — it creates bugs with bookmarklets. So, do not update your CSP directives to allow bookmarklets. We can’t blame any one browser in particular; all of them have issues:
Most of the time, the bugs are false positives in blocked notifications. All browser vendors are working on these issues, so we can expect fixes soon. Anyway, this should not stop you from using CSP.
By Nicolas Hofmann again, a CSP evangelist :)
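The Report-Only rollout mentioned above boils down to shipping the same policy under a different header name first, collecting violations at a report-uri endpoint, then enforcing. A minimal sketch (the directives and report endpoint are examples, not a recommended policy):

```python
# Build a CSP header, either enforcing or report-only.

def csp_header(directives: dict, report_only: bool = False):
    name = ("Content-Security-Policy-Report-Only"
            if report_only else "Content-Security-Policy")
    value = "; ".join(f"{k} {v}" for k, v in directives.items())
    return name, value

name, value = csp_header(
    {"default-src": "'self'",
     "report-uri": "https://example.report-uri.io/r/default/csp"},
    report_only=True,
)
print(f"{name}: {value}")
```

Once the violation reports stop showing false positives, the same directives can be moved to the enforcing header name unchanged.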
LinkChecker is a free, GPL licensed website validator. LinkChecker checks links in web documents or full websites. It runs on Python 2 systems, requiring Python 2.7.2 or later. Python 3 is not yet supported.
– recursive and multithreaded checking and site crawling
– output in colored or normal text, HTML, SQL, CSV, XML or a sitemap graph in different formats
– HTTP/1.1, HTTPS, FTP, mailto:, news:, nntp:, Telnet and local file links support
– restriction of link checking with regular expression filters for URLs
– username/password authorization for HTTP and FTP and Telnet
– honors robots.txt exclusion protocol
– Cookie support
– HTML5 support
Google adds HSTS support on youtube.com domain in addition to google.com
“HSTS prevents people from accidentally navigating to HTTP URLs by automatically converting insecure HTTP URLs into secure HTTPS URLs. Users might navigate to these HTTP URLs by manually typing a protocol-less or HTTP URL in the address bar, or by following HTTP links from other websites.”
At the end of July, Google had already added HSTS to www.google.com
and another interesting article here:
95% of HTTPS servers vulnerable to trivial MITM attacks, according to Netcraft.
You can activate HSTS by just adding one line in your server config:
[= 1 year]
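The header line itself did not survive in the post; the “[= 1 year]” note refers to the max-age value in seconds. In nginx, for example (assuming nginx, which the post does not specify), the line would be:

```nginx
# 31536000 seconds = 365 days = 1 year
add_header Strict-Transport-Security "max-age=31536000" always;
```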