DBpedia – A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia
« DBpedia is a crowd-sourced community effort to extract structured content from the information created in various Wikimedia projects. This structured information resembles an open knowledge graph (OKG) which is available for everyone on the Web. […] DBpedia data is served as Linked Data, which is revolutionizing the way applications interact with the Web. One can navigate this Web of facts with standard Web browsers, automated crawlers or pose complex queries with SQL-like query languages (e.g. SPARQL). Have you thought of asking the Web about all cities with low criminality, warm weather and open jobs? That’s the kind of query we are talking about. »
►https://wiki.dbpedia.org/about #datasets #knowledge #graph #rdf
Learn basic #html
▻https://hackernoon.com/learn-basic-html-be230361457?source=rss----3a8144eabfe3---4
HTML is the language for creating web pages, and HTML elements are the blocks used to build a page. Learn the basics to become a front-end professional. (This article was initially posted on ▻https://www.developermate.com)
HTML stands for HyperText Markup Language. The first web pages were released in 1990, and those pages were used only for presentation. Today the web is a very important part of our daily lives. You can use many different web browsers to view web pages, such as Google Chrome, Opera, Internet Explorer and Firefox.
HTML Page Structure
It is important to understand the HTML page structure. The <head> is used for the title and meta tags. Only content inside the <body> section (the white area) is displayed by the web browser.
HTML Tags
HTML tags are (...)
Qgis2threejs plugin documentation
▻https://qgis2threejs.readthedocs.io/en/docs
Qgis2threejs is a QGIS plugin that visualizes DEM and vector data in 3D in web browsers.
How GitHub became the nexus of software automation | ZDNet
▻https://www.zdnet.com/article/how-github-sneaked-up-on-us-to-become-the-nexus-of-software-automation
To call GitHub a website is to call Italy a place to eat. GitHub is the leading practitioner of an emerging marketplace — and yes, it may legitimately be called a “market” because it does generate revenue. It earned, by several estimates, over $200 million in revenue in 2017, and was evidently valuable enough to Microsoft to prompt it to purchase GitHub outright, in a $7.5 billion all-stock deal last June.
It is accurate and fair to say that GitHub created a market in the supply of open-source software, and the automation of its deployment. There are other competitors in this market, most notably GitLab and Atlassian’s Bitbucket. It’s the presence of those players that legitimizes this market.
What GitHub has become is the most effective example to date of a web service that absorbs the function of an entire industry’s supply chain. Open-source software has been shared online in the past, with SourceForge being one of the most effective practitioners. But the distribution of software through SourceForge, and sites like it, takes place using a content management system — a platform best suited for folks using web browsers.
What does deserve further scrutiny as time goes on, however, is how this deal, by legitimizing the open-source delivery pipeline as a top-tier industry, will alter the character of the open-source movement. If it was ever truly a counter-culture, it certainly isn’t one now. Although GitHub’s profitability may not be directly due to the popularity of sharing code, it is indeed tied to the automated, pipelined supply chain to which open source gave rise. If that model is truly as influential as open source proponents assert it to be, then nothing Microsoft would do to change it one way or the other, in the long run, should have any noticeable effect.
#Logiciel_libre #GitHub #Industrie_du_logiciel #Neurocapitalisme
9 Great Tools for Algo Trading
▻https://hackernoon.com/9-great-tools-for-algo-trading-e0938a6856cd?source=rss----3a8144eabfe3--
Photo by Adrian Curiel on Unsplash
In the last 5–10 years algorithmic trading, or algo trading, has gained popularity with individual investors. The rise in popularity has been accompanied by a proliferation of tools and services for both testing and trading with algorithms. I’ve put together a list of 9 tools you should consider using in your algo trading process.
Web Services
The following are managed services that you can use through web browsers and that don’t require much setup from the user. As someone who recently started in this field, I found them easy for new algo traders to try out.
(1) Quantopian: A Boston-based crowd-sourced hedge fund, Quantopian provides an online IDE to backtest algorithms. Their platform is built with Python, and all algorithms are implemented in Python. When testing (...)
#algotrading #finance #crypto-trading-tools #algo-trading-tools #crypto-trading
How to take advantage of Local Storage in your #react projects
▻https://hackernoon.com/how-to-take-advantage-of-local-storage-in-your-react-projects-a895f2b2d3
And why you ought to. Local Storage is a Web API native to modern web browsers. It allows websites/apps to store data (simple and limited) in the browser, making that data available in future browser sessions.
Before diving into the tutorial, it may be unclear why you’d even want to use Local Storage in your React apps. There are plenty of reasons and use cases, more than I can imagine, but here are a few I’ve discovered.
A simple, fake backend for your frontend React projects — it’s often nice to add the appearance of a backend/database to your frontend portfolio projects. The extra functionality will take your app to the next level, improve the user experience and impress potential employers.
Experiment with different states while developing — when working on an app, it’s often useful or necessary for (...)
#programming #javascript #web-development #front-end-development
Edward Snowden’s New App Uses Your Smartphone to Physically Guard Your Laptop
▻https://theintercept.com/2017/12/22/snowdens-new-app-uses-your-smartphone-to-physically-guard-your-laptop
Like many other journalists, activists, and software developers I know, I carry my laptop everywhere while I’m traveling. It contains sensitive information: messaging app conversations, email, password databases, encryption keys, unreleased work, web browsers logged into various accounts, and so on. My disk is encrypted, but all it takes to bypass this protection is for an attacker — a malicious hotel housekeeper, or “evil maid,” for example — to spend a few minutes physically tampering with it without my knowledge. If I come back and continue to use my compromised computer, the attacker could gain access to everything.
Edward Snowden and his friends have a solution. The NSA whistleblower and a team of collaborators have been working on a new open source Android app called Haven that you install on a spare smartphone, turning the device into a sort of sentry to watch over your laptop. Haven uses the smartphone’s many sensors — microphone, motion detector, light detector, and cameras — to monitor the room for changes, and it logs everything it notices. The first public beta version of Haven has officially been released; it’s available in the Play Store and on F-Droid, an open source app store for Android.
#haven #surveillance
▻https://github.com/guardianproject/haven
How to defend your website with ZIP bombs
▻https://blog.haschek.at/post/f2fda
So it turns out #ZIP compression is really good with repetitive data: if you have a really huge text file consisting of repetitive data, like all zeroes, it will compress really well. Like REALLY well.
As 42.zip shows us, it can pack a 4.5 petabyte (4,500,000 gigabyte) file down to 42 kilobytes. When you actually try to look at the content (extract or decompress it), you’ll most likely run out of disk space or RAM.
Sadly, web browsers don’t understand ZIP, but they do understand GZIP.
So first we’ll have to create a 10 gigabyte GZIP file filled with zeroes. We could apply multiple rounds of compression, but let’s keep it simple for now.
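A scaled-down sketch of that step with Python’s standard gzip module, using 10 MB of zeroes instead of the article’s 10 GB (the sizes here are illustrative, not from the post):

```python
import gzip
import io

# Compress a run of zero bytes entirely in memory.
# 10 MB stands in for the article's 10 GB; the principle is identical.
raw_size = 10 * 1024 * 1024
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    gz.write(b"\x00" * raw_size)

compressed = buf.getvalue()
print(len(compressed))             # roughly 10 KB for 10 MB of zeroes
print(raw_size / len(compressed))  # compression ratio near the ~1000:1 deflate limit
```

Served with a `Content-Encoding: gzip` response header, the client tries to inflate the full uncompressed size on arrival, which is the trap the article builds.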
Japan’s Rakuten Is Betting on a Future Without Apps
▻https://www.bloomberg.com/news/articles/2017-04-04/japan-s-rakuten-bets-on-post-app-future-with-new-gaming-service
“The e-commerce company unveiled Rakuten Games on Tuesday, seeking to deliver titles that don’t have to be installed on phones or personal computers. The games can be played on web browsers or within other apps, making it easier for users to play with each other without having to wait for new software to be loaded onto their devices.”
Gomix - The easiest way to build the app or bot of your dreams
▻https://gomix.com/about
People who were around in the Web’s early days will understand Gomix easily: we’re bringing “View Source” back. Of course, “View Source” was never literally taken out of web browsers, but the ability to just look at the code behind something, tweak it, and make your own thing was essential to making the Internet fun, and weird, and diverse, in its early days.
The Ad-Blocking Hacker Making Your Browser More Paranoid
▻http://www.wired.com/2016/05/meet-ad-blocking-hacker-making-browser-paranoid
We’ve lost control of our web browsers. Sure, we tell them what sites to load. But after that, browsers do the bidding of someone else’s server, executing code that could, for all we know, install malware on our phones and computers to spy on our every digital move. And sometimes they do. In 2009, The New York Times inadvertently served an ad that redirected readers to a page claiming that their computers were already infected with malware. It urged them to download fake antivirus software (...)
Memory-Efficient MeSH Tagging with Character-Based Hierarchical Context Classification
▻https://research.science.ai/article/memory-efficient-mesh-tagging
Medical Subject Headings (MeSH) represents a monumental effort in categorizing the breadth and depth of concepts within the biomedical sciences, and has been instrumental in improving the indexing of biomedical scholarly articles. There have been many efforts over the years to create a completely automated MeSH tagging system, but such a system has proven to be quite challenging. This is in no small part due to the vast scope of MeSH. Many of these systems require large resource footprints due to the necessity of performing computation on large sets of documents, or over large numbers of iterations or steps. We present here a memory-efficient automatic MeSH tagging system written entirely in JavaScript that is lightweight enough to be run in the web browser, using hierarchical context classification with character-based convolutional and recurrent neural networks. These deep neural networks can be run in modern web browsers or in Node.js through neocortex.js, an open source library developed as a part of this work. The system is evaluated on over 15 million abstracts in MEDLINE/PubMED, with resulting performance similar to that of existing systems, but at a significantly reduced computational cost and model complexity.
TeX.js: Typesetting for the Web
▻https://davidar.io/TeX.js
This page introduces #TeX.js, a JavaScript library for performing high-quality typesetting within web browsers. It is designed to require only basic familiarity with HyperText Markup Language (HTML) from the author — no knowledge of JavaScript or Cascading Style Sheets (CSS) is necessary. Although not as sophisticated as the TeX typesetting engine, the output produced is of much higher quality than what can be obtained with unstyled HTML.
Vivliostyle Project — open source, web browser based #CSS typesetting engine
▻http://vivliostyle.com/project
▻http://vivliostyle.github.io/vivliostyle.js
example:
The concept of this project is to make a new typesetting system fitting for the digital publishing era based on web browser technology.
We are aiming for:
– enhancing the typography and layout capability of web browsers, to be used as typesetting engines for both electronic and print publishing
– implementing CSS typesetting features with JavaScript (polyfills)
– cooperating with the W3C standardization of CSS typesetting specifications, and advancing implementation
The Slow Death of ‘Do Not Track’
▻http://www.nytimes.com/2014/12/27/opinion/the-slow-death-of-do-not-track.html?_r=0
HAYMARKET, Va. — FOUR years ago, the Federal Trade Commission announced, with fanfare, a plan to let American consumers decide whether to let companies track their online browsing and buying habits. The plan would let users opt out of the collection of data about their habits through a setting in their web browsers, without having to decide on a site-by-site basis. The idea, known as “Do Not Track,” and modeled on the popular “Do Not Call” rule that protects consumers from unwanted telemarketing calls, is simple. But the details are anything but. Although many digital advertising companies agreed to the idea in principle, the debate over the definition, scope and application of “Do Not Track” has been raging for several years. Now, finally, an industry working group is expected to propose (...)
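Mechanically, the preference is tiny: a single `DNT: 1` request header that the browser attaches to every request, which sites are free to honor or ignore — that voluntariness is exactly the policy fight the article describes. A minimal sketch with Python’s standard library (the URL is a placeholder):

```python
import urllib.request

# Build a request carrying the "Do Not Track" preference.
# urllib normalizes header names, so "DNT" is stored as "Dnt".
req = urllib.request.Request("https://example.com/", headers={"DNT": "1"})
print(req.headers)  # {'Dnt': '1'}
```

Nothing in the protocol enforces the preference; compliance was left to the working group negotiations the article covers.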
Bokeh - a Python interactive visualization library
▻http://bokeh.pydata.org
#Bokeh is a #Python interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of novel graphics in the style of #D3.js, but also deliver this capability with high-performance interactivity over very large or streaming datasets.
HTML5 Canvas Fingerprint — Widely Used Unstoppable Web Tracking Technology
▻http://thehackernews.com/2014/07/html5-canvas-fingerprint-widely-used.html
Basically, web browsers use different image-processing engines, export options and compression levels, so each computer draws the image slightly differently. These images can be used to assign each user’s device a number (a fingerprint) that uniquely identifies it, i.e. browser fingerprinting. According to a research paper published by computer security experts from Princeton University and KU Leuven University in Belgium, Canvas fingerprint tracking has made it more difficult for even sophisticated computer users to protect their privacy.
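The core trick can be sketched outside the browser: the canvas drawing instructions are fixed, but the pixel bytes each machine renders differ slightly, and hashing those bytes yields a compact identifier. A conceptual Python sketch — the byte strings below are stand-ins for real canvas output, not browser data:

```python
import hashlib

def canvas_fingerprint(pixel_bytes: bytes) -> str:
    """Hash rendered pixel data into a stable identifier for that rendering."""
    return hashlib.sha256(pixel_bytes).hexdigest()

# Two renderings that differ by a single anti-aliased channel value
# produce completely unrelated fingerprints.
render_a = bytes([255, 255, 255, 0, 0, 0] * 100)
render_b = bytes([255, 254, 255, 0, 0, 0] * 100)

print(canvas_fingerprint(render_a) == canvas_fingerprint(render_b))  # False
```

Because the hash is deterministic per device but differs across devices, it works as a tracking identifier that survives cookie deletion.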
#c
LICEcap, #screencast as animated #GIF
▻http://www.cockos.com/licecap
“LICEcap can capture an area of your desktop and save it directly to .GIF (for viewing in web browsers, etc)” Tags: animated GIF screencast #enregistrement #écran #logiciel #Windows #mac
DNSSEC/TLSA Validator add-on for Web Browsers
▻https://www.dnssec-validator.cz
DNSSEC/TLSA Validator is a web browser add-on which allows you to check the existence and validity of #DNS #Security Extensions (#DNSSEC) records and Transport Layer Security Association (#TLSA) records related to #domain names in your browser’s address bar. The results of these checks are displayed using icons and information texts in the page’s address bar or toolbar. Currently, the Internet Explorer (IE), Mozilla #Firefox (MF) and Google Chrome (GC) web browsers are supported.
I don’t know how good it really is, but it doesn’t look bad at all.
Unreported Side Effects of Drugs Are Found Using Internet Search Data, Study Finds - NYTimes.com
▻http://www.nytimes.com/2013/03/07/science/unreported-side-effects-of-drugs-found-using-internet-data-study-finds.html
Using automated software tools to examine queries by six million Internet users taken from Web search logs in 2010, the researchers looked for searches relating to an antidepressant, paroxetine, and a cholesterol lowering drug, pravastatin. They were able to find evidence that the combination of the two drugs caused high blood sugar.
The study, which was reported in the Journal of the American Medical Informatics Association on Wednesday, is based on data-mining techniques similar to those employed by services like Google Flu Trends, which has been used to give early warning of the prevalence of the sickness to the public.
The original article (abstract only): Web-scale pharmacovigilance: listening to signals from the crowd ▻http://jamia.bmj.com/content/early/2013/02/05/amiajnl-2012-001482.abstract
He turned to computer scientists at Microsoft, who created software for scanning anonymized data collected from a software toolbar installed in Web browsers by users who permitted their search histories to be collected. The scientists were able to explore 82 million individual searches for drug, symptom and condition information.
The researchers first identified individual searches for the terms paroxetine and pravastatin, as well as searches for both terms, in 2010. They then computed the likelihood that users in each group would also search for hyperglycemia as well as roughly 80 of its symptoms — words or phrases like “high blood sugar” or “blurry vision.”
They determined that people who searched for both drugs during the 12-month period were significantly more likely to search for terms related to hyperglycemia than were those who searched for just one of the drugs. (About 10 percent, compared with 5 percent and 4 percent for just one drug.)
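The comparison the study makes can be sketched in a few lines: group users by which drug terms appear in their queries, then compare the share of each group that also searched for symptom terms. A toy version with invented users and queries (the real study processed 82 million searches):

```python
from collections import defaultdict

SYMPTOMS = {"high blood sugar", "blurry vision", "hyperglycemia"}

# Toy search log of (user_id, query) pairs; all entries are invented.
log = [
    ("u1", "paroxetine dosage"), ("u1", "pravastatin"), ("u1", "blurry vision"),
    ("u2", "paroxetine"), ("u3", "pravastatin side effects"),
    ("u4", "paroxetine"), ("u4", "pravastatin interactions"),
]

def drug_groups(log):
    """Map each user to the set of study drugs appearing in their queries."""
    users = defaultdict(set)
    for user, query in log:
        for drug in ("paroxetine", "pravastatin"):
            if drug in query.lower():
                users[user].add(drug)
    return users

def symptom_rate(log, group):
    """Fraction of a user group that also searched for any symptom term."""
    searched = {u for u, q in log if any(s in q.lower() for s in SYMPTOMS)}
    return len(group & searched) / len(group) if group else 0.0

users = drug_groups(log)
both = {u for u, drugs in users.items() if len(drugs) == 2}
print(sorted(both))             # ['u1', 'u4']
print(symptom_rate(log, both))  # 0.5
```

The study’s finding corresponds to `symptom_rate` being markedly higher for the both-drugs group than for either single-drug group (about 10 percent versus 5 and 4 percent).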
(…)
The researchers said they were surprised by the strength of the “signal” that they detected in the searches and argued that it would be a valuable tool for the F.D.A. to add to its current system for tracking adverse effects.
(…)
“I think there are tons of drug-drug interactions — that’s the bad news,” Dr. Altman said. “The good news is we also have ways to evaluate the public health impact.”
2011: A Badass JavaScript Year In Review - Badass JavaScript
►http://badassjs.com/post/15082876071/2011-a-badass-javascript-year-in-review
2011 has been a great year for JavaScript. Web browsers have given us great new tools to use and we have taken web applications to new heights, competing with native applications and bringing sexy back to the web with countless impressive demos. A week or so ago, we put out a survey to all of you asking what you thought the most badass JavaScript demo of 2011 was. Of course, this would be a boring post if we just said what the outcome was, so we’re going to list the top JavaScript accomplishments of 2011 here, from demos to libraries and applications themselves.
Introducing WebAPI ✩ Mozilla Hacks
►http://hacks.mozilla.org/2011/08/introducing-webapi
Mozilla would like to introduce WebAPI, with the goal of providing a basic HTML5 phone experience within 3 to 6 months.
The current situation
Where we are today, there’s a clear distinction between the Open Web and native APIs, and between how things have to be built for each. As many developers are aware, we need consistent APIs across web browsers, operating systems and devices to be able to build something for the world, not just for a specific device or vendor. We need a way to take the web to the next step.
What is WebAPI?
WebAPI is an effort by Mozilla to bridge the gap and provide consistent APIs that will work in all web browsers, no matter the operating system. Specification drafts and implementation prototypes will be available, and they will be submitted to the W3C for standardization. Security is a very important factor here; it will be a mix of existing security measures (e.g. asking the user for permission, as with Geolocation) and new alternatives to ensure it.
HTML5 Rocks - How Browsers Work: Behind the Scenes of Modern Web Browsers
►http://www.html5rocks.com/en/tutorials/internals/howbrowserswork
At least 676 organizations certify - badly - the #security of #internet sites; #cybersécurité #certification
An Attack Sheds Light on Internet Security Holes
▻http://www.nytimes.com/2011/04/07/technology/07hack.html
“The encryption used by many Web sites to prevent eavesdropping on their interactions with visitors is not very secure. This technology is in use when Web addresses start with “https” (in which “s” stands for secure) and a closed lock icon appears on Web browsers. These sites rely on third-party organizations, like Comodo, to provide “certificates” that guarantee sites’ authenticity to Web browsers.”
“But many security experts say the problems start with the proliferation of organizations permitted to issue certificates. Browser makers like Microsoft, Mozilla, Google and Apple have authorized a large and growing number of entities around the world — both private companies and government bodies — to create them. Many private “certificate authorities” have, in turn, worked with resellers and deputized other unknown companies to issue certificates in a “chain of trust” that now involves many hundreds of players, any of which may in fact be a weak link.”