industryterm:open source tool

  • Google Takes Its First Steps Toward Killing the URL | WIRED
    https://www.wired.com/story/google-chrome-kill-url-first-steps

    The Chrome team’s efforts so far focus on figuring out how to detect URLs that seem to deviate in some way from standard practice. The foundation for this is an open source tool called TrickURI, launching in step with Stark’s conference talk, that helps developers check that their software is displaying URLs accurately and consistently. The goal is to give developers something to test against so they know how URLs are going to look to users in different situations. Separate from TrickURI, Stark and her colleagues are also working to create warnings for Chrome users when a URL seems potentially phishy. The alerts are still in internal testing, because the complicated part is developing heuristics that correctly flag malicious sites without dinging legitimate ones.

    For Google users, the first line of defense against phishing and other online scams is still the company’s Safe Browsing platform. But the Chrome team is exploring complements to Safe Browsing that specifically focus on flagging sketchy URLs.

    “Our heuristics for detecting misleading URLs involve comparing characters that look similar to each other and domains that vary from each other just by a small number of characters,” Stark says. “Our goal is to develop a set of heuristics that pushes attackers away from extremely misleading URLs, and a key challenge is to avoid flagging legitimate domains as suspicious. This is why we’re launching this warning slowly, as an experiment.”
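
    As a rough sketch of the two signals Stark describes, the check below (my own illustration, not Google’s TrickURI or Chrome’s actual heuristics; the trusted list and confusable table are made up) flags a domain when folding visually confusable characters makes it identical to a well-known domain, or when it sits within a small edit distance of one.

    ```python
    from difflib import SequenceMatcher

    TRUSTED = ["google.com", "paypal.com", "wikipedia.org"]   # hypothetical allow-list
    CONFUSABLES = {"0": "o", "1": "l", "rn": "m", "vv": "w"}  # tiny sample; real sets are much larger

    def normalize(domain: str) -> str:
        """Fold a few visually similar character sequences into their look-alikes."""
        d = domain.lower()
        for fake, real in CONFUSABLES.items():
            d = d.replace(fake, real)
        return d

    def looks_misleading(domain: str, threshold: float = 0.9) -> bool:
        """Flag domains that imitate a trusted domain without being it."""
        d = normalize(domain)
        for trusted in TRUSTED:
            if domain.lower() == trusted:
                return False                      # the genuine domain itself
            if d == trusted:
                return True                       # only confusable substitutions away
            if SequenceMatcher(None, d, trusted).ratio() >= threshold:
                return True                       # a character or two away, e.g. "gooogle.com"
        return False

    print(looks_misleading("go0gle.com"))    # True
    print(looks_misleading("gooogle.com"))   # True
    print(looks_misleading("example.com"))   # False
    ```

    The false-positive problem Stark mentions is visible even in this toy version: any legitimate domain that happens to sit close to a trusted one would be flagged too.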

    #internet #Google #disruption

  • How I started doing load #testing on #graphql without writing a single Query
    https://hackernoon.com/how-i-started-doing-load-testing-on-graphql-without-writing-a-single-que

    EasyGraphQL. Some time ago I was working on a GraphQL project that included activities, where each activity can have comments with info about the user who created the comment. The first thing you might think is that this is an n + 1 query problem, and yes, it is! I decided to implement dataloaders, but for some reason there was an error in the implementation, so it wasn’t caching the query and the result was a lot of requests to the database. After finding that issue I implemented it the right way, reducing the queries to the database from 46 to 6. That’s why I decided to create an open source tool that helps me create queries and run load tests just by passing in my GraphQL schema. How it works: easygraphql-load-tester can be used in two ways for the moment; the first one is using (...)
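
    The n + 1 pattern the author describes is easy to see in a few lines. The sketch below is my own Python illustration (the article’s tooling is JavaScript, and these tables are made up): the naive resolver issues one user query per comment, while the dataloader-style version batches all the user ids into a single query.

    ```python
    import sqlite3

    # Toy schema: comments reference the user who wrote them.
    db = sqlite3.connect(":memory:")
    db.row_factory = sqlite3.Row
    db.executescript("""
        CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE comments (id INTEGER PRIMARY KEY, user_id INTEGER);
        INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
        INSERT INTO comments VALUES (1, 1), (2, 2), (3, 1);
    """)

    comments = db.execute("SELECT * FROM comments").fetchall()

    # n + 1 behaviour: one extra query per comment
    authors_naive = [db.execute("SELECT * FROM users WHERE id = ?",
                                (c["user_id"],)).fetchone()
                     for c in comments]

    # dataloader-style behaviour: collect the keys, run one query, fan the rows back out
    ids = sorted({c["user_id"] for c in comments})
    rows = db.execute(f"SELECT * FROM users WHERE id IN ({','.join('?' * len(ids))})",
                      ids).fetchall()
    by_id = {r["id"]: r for r in rows}
    authors_batched = [by_id[c["user_id"]] for c in comments]

    print(len(comments), "comments ->", 1 + len(comments), "queries naively, 2 batched")
    ```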

    #javascript #artillery #load-testing

  • Sneak Peak into Apache #zookeeper
    https://hackernoon.com/sneak-peak-into-apache-zookeeper-10417393765?source=rss----3a8144eabfe3-

    Apache Zookeeper is an open source tool from the Apache Foundation, originally developed at Yahoo. Thanks, Yahoo, for Zookeeper. Zookeeper is written in Java and is platform independent. What is a distributed system? Multiple independent computers connected together that appear as a single computer to users. Distributed systems communicate over the network by passing messages. All components in a distributed system interact with each other to perform subsets of tasks and achieve a common goal. Why use a distributed system? Reliability: the system continues to run even if one or more servers in the distributed system fail. Scalability: the system can be horizontally scaled up and down as the workload requires. Challenges of distributed systems? Race conditions: a race condition occurs when two or (...)
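
    As a concrete taste of the coordination problems mentioned above, the sketch below (my own illustration, assuming the third-party kazoo client and a ZooKeeper server on localhost:2181; the znode paths are made up) serializes updates to a shared counter through a ZooKeeper lock, which is the usual way to avoid such race conditions.

    ```python
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    zk.ensure_path("/app/shared-counter")          # create the znode if it does not exist

    lock = zk.Lock("/app/locks/shared-counter", identifier="worker-1")
    with lock:                                     # blocks until this worker holds the lock
        # critical section: only one worker at a time updates the shared znode
        data, stat = zk.get("/app/shared-counter")
        current = int(data.decode() or "0")
        zk.set("/app/shared-counter", str(current + 1).encode())

    zk.stop()
    ```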

    #devops #distributed-systems #apache-zookeeper #linux

  • How to justify the price of your service on your Landing page (by explaining the service properly)
    https://hackernoon.com/how-to-justify-the-price-of-your-service-on-your-landing-page-by-explain

    Before and After — Laravel Factory. In this article, I will be showing the problems Laravel Factory’s website has that keep it from justifying its price properly, since what it sells is saving a few hours of work that customers could do themselves with open source tools (and have been doing themselves until now). What’s Laravel Factory? Laravel Factory is a tool that enables you to set up Laravel apps in much less time and with a super easy process, saving around 15–20 hours per setup. The process: this will be specific to Laravel Factory; if you want to learn my 5-step process to turn visitors into customers that you can apply to your own website, here’s my 100% free email course. Things to keep in mind: this product is mainly targeting agencies or freelancers that have to do this type of work several times a (...)

    #landing-pages #growth #conversion-optimization #marketing #saas

  • Centralised #logging and #monitoring of a Distributed System and Application using ELK
    https://hackernoon.com/centralised-logging-and-monitoring-of-a-distributed-system-and-applicati

    Introduction: Nowadays, centralized logging and monitoring have become an integral part of our technology stack. They empower organizations to gain important insights into operations, security, and infrastructure. We have different types of service-specific metrics and logs. Metrics and log files from different distributed web servers, applications, and operating systems can be collected and combined into a centralized overview of useful insights. There are some very popular SaaS-based logging and monitoring tools, e.g. Datadog, Splunk, Loggly and many others. Implementation: the ELK Stack is the most commonly used open source tool for this kind of purpose. Users can run their own ELK deployment or use a managed #elasticsearch service, e.g. Amazon Elasticsearch or Elastic Cloud. ELK is used because (...)
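
    The core of the “centralize everything” idea is simply that every log line becomes a searchable document in Elasticsearch, which Kibana then queries. A minimal sketch, assuming the official elasticsearch Python client (8.x API) and a local node; the index and field names are made up:

    ```python
    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    log_event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": "checkout-api",        # hypothetical service name
        "level": "ERROR",
        "message": "payment provider timed out",
        "host": "web-03",
    }

    # Each log line becomes a JSON document in a per-application index.
    es.index(index="app-logs-2024.01", document=log_event)
    ```

    In a typical ELK deployment, Beats or Logstash do this shipping rather than application code, but the indexed documents end up in much the same shape.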

    #aws #kibana

  • How we Built pythonjobs.github.io
    https://hackernoon.com/building-pythonjobs-github-io-6119cd708802?source=rss----3a8144eabfe3---

    Modern open source tools are amazing. A while ago, a colleague (@salimfhadley) came over to my desk and mentioned an idea that he’d had: how hard would it be to build a job site that used git pull requests for new job submissions? At the time, the #python.org jobs board had been down for maintenance for nearly a year, and there wasn’t any obvious free, moderated place to list or view job opportunities. Coincidentally, at that time, our employer was hiring, and Sal was heavily involved with that process. Sal’s idea was to have a static site, using #github pull requests to manage submissions. This idea is awesome for a number of reasons: lots of tools and services exist to make managing change requests from strangers easy; the entire process is public, transparent, and well understood; having a slightly (...)

    #recruiting #open-source #python-jobs

  • ‘GameMode’ is a new tool that can improve gaming performance on Linux
    https://www.omgubuntu.co.uk/2018/04/feral-interactive-gamemode-linux

    Wish you could wring every single drop of performance from your computer when gaming on Linux? Well, a new open source tool from games porting company Feral Interactive wants to help you do exactly that. Say hello to GameMode. GameMode is all about performance. Launched today, ‘GameMode’ is a small daemon/lib combo for Linux that allows games […]

  • 15+ Best places to learn #wordpress and become a pro!
    https://hackernoon.com/15-best-places-to-learn-wordpress-and-become-a-pro-ac86b904c85f?source=r

    WordPress.org. When I started my career in WordPress a year and a half ago, I just had a faint idea of what WordPress is and had previously used it just for blogging purposes. Until last year, I had no idea of the immense potential this open source tool has. Quite soon I realized that I had in fact been using WordPress.com, and what I was about to start dealing with was WordPress.org. For those who have just stepped into WordPress, you can read my answer on Quora on the difference between WordPress.org and WordPress.com. Now you may ask, why should I learn WordPress? For the obvious reason that it’s free, open source and highly customizable. Moreover, it doesn’t require you to have coding expertise in order to set up a functioning website with WordPress. But, if you wish to set (...)

    #learn-wordpress #web-development #beginners-guide #website-development

  • The Biggest Misconceptions about Artificial Intelligence
    http://knowledge.wharton.upenn.edu/article/whats-behind-the-hype-about-artificial-intelligence-separat

    Knowledge@Wharton: Interest in artificial intelligence has picked up dramatically in recent times. What is driving this hype? What are some of the biggest prevailing misconceptions about AI and how would you separate the hype from reality?

    Apoorv Saxena: There are multiple factors driving strong interest in AI recently. First is significant gains in dealing with long-standing problems in AI. These are mostly problems of image and speech understanding. For example, now computers are able to transcribe human speech better than humans. Understanding speech has been worked on for almost 20 to 30 years, and only recently have we seen significant gains in that area. The same thing is true of image understanding, and also of specific parts of human language understanding such as translation.

    Such progress has been made possible by applying an old technique called deep learning and running it on highly distributed and scalable computing infrastructure. That, combined with the availability of large amounts of data to train these algorithms and easy-to-use tools to build AI models, is the major factor driving interest in AI.

    It is natural for people to project the recent successes in specific domains into the future. Some are even projecting the present into domains where deep learning has not been very effective, and that creates a lot of misconception and also hype. AI is still pretty bad at learning new concepts and extending that learning to new contexts.

    For example, AI systems still require a tremendous amount of data to train. Humans do not need to look at 40,000 images of cats to identify a cat. A human child can look at two cats and figure out what a cat and a dog are — and distinguish between them. So today’s AI systems are nowhere close to replicating how the human mind learns. That will be a challenge for the foreseeable future.

    While everything up to there is clean, the last sentence is striking: “That will be a challenge for the foreseeable future.” It is not about giving up on computers understanding or creating concepts, just about giving themselves the time to do it tomorrow. In World Without Mind, Franklin Foer writes at length about the desire of Google’s leaders to build a computer that would be an improved human brain. But what about emotions, feelings, the physical relationship to the world?

    As I mentioned, in narrow domains such as speech recognition AI is now more sophisticated than the best humans, while in more general domains that require reasoning, context understanding and goal seeking, AI can’t even compete with a five-year-old child. I think AI systems have still not figured out how to do unsupervised learning well, or learned how to train on a very limited amount of data, or how to train without a lot of human intervention. That is going to be the main thing that continues to remain difficult. None of the recent research has shown a lot of progress here.

    Knowledge@Wharton: In addition to machine learning, you also referred a couple of times to deep learning. For many of our readers who are not experts in AI, could you explain how deep learning differs from machine learning? What are some of the biggest breakthroughs in deep learning?

    Saxena: Machine learning is much broader than deep learning. Machine learning is essentially a computer learning patterns from data and using the learned patterns to make predictions on new data. Deep learning is a specific machine learning technique.

    Deep learning is modeled on how human brains supposedly learn, and uses neural networks (layered networks of neurons) to learn patterns from data and make predictions. So just as humans use different levels of conceptualization to understand a complex problem, each layer of neurons abstracts out a specific feature or concept in a hierarchical way to understand complex patterns. And the beauty of deep learning is that, unlike other machine learning techniques whose prediction performance plateaus when you feed in more training data, deep learning performance continues to improve with more data. Also, deep learning has been applied to very different sets of problems and shown good performance, which is typically not possible with other techniques. All of this makes deep learning special, especially for problems where you can easily throw in more data and computing power.
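
    To make the “layers of abstraction” point concrete, here is a minimal sketch of a forward pass through a small multi-layer network (my own illustration, not anything from the interview); each layer transforms the previous layer’s output, so later layers can represent progressively more abstract features:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(inputs, out_dim):
        """One fully connected layer with a ReLU non-linearity (untrained, random weights)."""
        weights = rng.normal(scale=0.1, size=(inputs.shape[-1], out_dim))
        bias = np.zeros(out_dim)
        return np.maximum(0.0, inputs @ weights + bias)

    x = rng.normal(size=(4, 16))       # toy batch of 4 examples, 16 raw features
    h1 = layer(x, 32)                  # first level of abstraction
    h2 = layer(h1, 32)                 # second level, built on top of the first
    scores = h2 @ rng.normal(scale=0.1, size=(32, 3))  # 3-class prediction scores
    print(scores.shape)                # (4, 3)
    ```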

    Knowledge@Wharton: The other area of AI that gets a lot of attention is natural language processing, often involving intelligent assistants, like Siri from Apple, Alexa from Amazon, or Cortana from Microsoft. How are chatbots evolving, and what is the future of the chatbot?

    Saxena: This is a huge area of investment for all of the big players, as you mentioned. This is generating a lot of interest, for two reasons. It is the most natural way for people to interact with machines, by just talking to them and the machines understanding. This has led to a fundamental shift in how computers and humans interact. Almost everybody believes this will be the next big thing.

    Still, early versions of this technology have been very disappointing. The reason is that natural language understanding or processing is extremely tough. You can’t use just one technique or deep learning model, for example, as you can for image understanding or speech understanding and solve everything. Natural language understanding inherently is different. Understanding natural language or conversation requires huge amounts of human knowledge and background knowledge. Because there’s so much context associated with language, unless you teach your agent all of the human knowledge, it falls short in understanding even basic stuff.

    From competition to the age of vectorialism:

    Knowledge@Wharton: That sounds incredible. Now, a number of big companies are active in AI — especially Google, Microsoft, Amazon, Apple in the U.S., or in China you have Baidu, Alibaba and Tencent. What opportunities exist in AI for startups and smaller companies? How can they add value? How do you see them fitting into the broader AI ecosystem?

    Saxena: I see value for both big and small companies. A lot of the investments by the big players in this space are in building platforms where others can build AI applications. Almost every player in the AI space, including Google, has created platforms on which others can build applications. This is similar to what they did for Android or mobile platforms. Once the platform is built, others can build applications. So clearly that is where the focus is. Clearly there is a big opportunity for startups to build applications using some of the open source tools created by these big players.

    The second area where startups will continue to play is with what we call vertical domains. So a big part of the advances in AI will come through a combination of good algorithms with proprietary data. Even though the Googles of the world and other big players have some of the best engineering talent and also the algorithms, they don’t have data. So for example, a company that has proprietary health care data can build a health care AI startup and compete with the big players. The same thing is true of industries such as finance or retail.

    #Intelligence_artificielle #vectorialisme #deep_learning #Google

  • How scientists can protect their data from the Trump administration
    (Micah Lee, Feb 2017)

    Very comprehensive text (just like his previous one on how to secure your communication https://seenthis.net/messages/569133) on BitTorrent technology and how you can use it to share your data, on Tor onion services and how to host hidden websites with them, and on OnionShare and how to use it to share data with colleagues without leaving a trace.

    https://theintercept.com/2017/02/01/how-scientists-can-protect-their-data-from-the-trump-administration

    some scientists have already begun trying to preserve government data they worry will be deleted, altered, or removed, and many are preparing to march on Washington to protest Trump’s dangerous science denialism.

    If you’re an American scientist who’s worried that your data might get censored or destroyed by Trump’s radically anti-science appointees, here are some technologies that could help you preserve it, and preserve access to it.

    – You can use a file-sharing technology called BitTorrent to ensure that your data always remains available to the public, with no simple mechanism for governments to block access to it.

    – You can use Tor onion services — sometimes referred to as the dark web — to host websites containing your data, research, and discussion forums that governments can’t block access to — and that keep your web server’s physical location obscure.

    – And you can use OnionShare, an open source tool that I developed, to securely and privately send datasets to your colleagues to hold onto in case something happens to your copy, without leaving a trace.
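
    For the BitTorrent option, the practical first step is publishing a .torrent file for the dataset and then seeding it. A minimal sketch using libtorrent’s Python bindings (an assumption on my part, not from the article; the directory and tracker URL are placeholders):

    ```python
    import libtorrent as lt

    # Describe the files that make up the dataset
    fs = lt.file_storage()
    lt.add_files(fs, "dataset/")                 # directory containing the data to preserve

    t = lt.create_torrent(fs)
    t.add_tracker("udp://tracker.example.org:1337/announce")  # placeholder tracker
    t.set_creator("dataset backup sketch")

    # Hash the file pieces (second argument is the parent directory of "dataset/")
    lt.set_piece_hashes(t, ".")

    with open("dataset.torrent", "wb") as f:
        f.write(lt.bencode(t.generate()))
    ```

    Anyone who then seeds from that .torrent keeps the data reachable; there is no single server to take down.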

    #privacy
    #Tor
    #BitTorrent
    #OnionShare

  • Ed Snowden taught me to smuggle secrets past incredible danger. Now I teach you.
    (Micah Lee, Oct 2014)

    – Explains how Poitras and Snowden set up a secure communication channel using anonymous e-mail, Tor Browser, GPG, and tweeting the key fingerprint.

    – Explains how he got Greenwald to encrypt his computer. (Greenwald didn’t know how to encrypt his computer nor how to use GPG, and got neither working.)

    – Talks about his involvement in the set-up of communications between Snowden, Greenwald and Poitras prior to the revelations.

    https://theintercept.com/2014/10/28/smuggling-snowden-secrets

    I think it’s helpful to show how privacy technologists can work with sources and journalists to make it possible for leaks to happen in a secure way. Securing those types of interactions is part of my job now that I work with Greenwald and Poitras at The Intercept, but there are common techniques and general principles from my interactions with Snowden that could serve as lessons to people outside this organization.

    [...]

    but in his first email to me, Snowden had forgotten to attach his key, which meant I could not encrypt my response. I had to send him an unencrypted email asking for his key first. His oversight was of no security consequence—it didn’t compromise his identity in any way—but it goes to show how an encryption system that requires users to take specific and frequent actions almost guarantees mistakes will be made, even by the best users.
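
    The mechanics behind that snag are simple: to send an encrypted reply you must already hold the recipient’s public key. A minimal sketch with the python-gnupg wrapper (my own illustration, not the tooling Micah used; the address and file name are placeholders):

    ```python
    import gnupg  # python-gnupg, which drives a locally installed gpg binary

    gpg = gnupg.GPG()  # uses the default keyring

    # Without the correspondent's public key, encryption cannot happen at all:
    result = gpg.encrypt("my reply", recipients=["source@example.com"])
    print(result.ok)       # False if no key is known for that address

    # Only after importing the key they sent can you encrypt to them:
    # gpg.import_keys(open("their_public_key.asc").read())
    # result = gpg.encrypt("my reply", recipients=["source@example.com"])
    ```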

    [...]

    after creating a customized version of Tails for Greenwald, I hopped on my bike and pedaled to the FedEx office on Shattuck Avenue in Berkeley, where I slipped the Tails thumb drive into a shipping package, filled out a customs form that asked about the contents (“Flash Drive Gift,” I wrote), and sent it to Greenwald in Brazil.

    The (comprehensive) 30-page tutorial Micah wrote about using open source tools to communicate securely:

    Encryption Works: How to Protect Your Privacy (And Your Sources) in the Age of NSA Surveillance
    https://freedom.press/news-advocacy/encryption-works-how-to-protect-your-privacy-and-your-sources-in-the-age-

    The whitepaper covers:

    – A brief primer on cryptography, and why it can be trustworthy
    – The security problems with software, and which software you can trust
    – How Tor can be used to anonymize your location, and the problems Tor has when facing global adversaries
    – How the Off-the-Record (OTR) instant message encryption protocol works and how to use it
    – How PGP email encryption works and best practices
    – How the Tails live GNU/Linux distribution can be used to ensure high endpoint security

    https://web.archive.org/web/20130822041429/https://pressfreedomfoundation.org/sites/default/files/encryption_works.pdf
    backup: https://www.docdroid.net/file/download/vk6cwnN/encryption-works.pdf
    HTML version: https://web.archive.org/web/20130727195447/https://pressfreedomfoundation.org/encryption-works

    #Edward_Snowden #Snowden
    #privacy
    #Tails #GPG #PGP

  • MapBox Integrates Premium Imagery Services for Fast Turnaround - Directions Magazine
    http://www.directionsmag.com/articles/mapbox-integrates-premium-imagery-services-for-fast-turnaround/340162

    MapBox is offering its clients a six-hour turnaround for those needing near real-time satellite imagery. “This is about making it easy for people to take real-time imagery and get it on the Web or in their app in minutes - using our open source tools,” said Chris Herwig, who leads the #Satellite team at #MapBox.

  • A project that might interest @fil:

    To illustrate the goals of dat consider the GitHub project, which is a great model of this idea working in a different space. GitHub is built on top of an open source tool called git and provides a user-friendly web application that lets software developers find code written by others, use it in their own programs and improve upon it. In a similar fashion dat will be developed as a set of tools to store, synchronize, manipulate and collaborate in a decentralized fashion on sets of data, hopefully enabling platforms analogous to GitHub to be built on top of it.

    https://github.com/maxogden/dat

    • To summarize, git is inadequate for:

      – real time data (e.g. lots of commits)
      – data filtering/subsets
      – compact history (disk efficient - only store enough to sync)
      – transforming data, as it doesn’t have a concept of data transformations and isn’t a scripting language

      Yeah, funny: a few months ago I had started designing a kind of DBMS to manage a “graph database” (nosql, if you like), but not some big centralized thing: a lightweight, decentralized, synchronizable database. Among other things I wanted to build on Git for the versioning and synchronization part, but you quickly hit its limits with very large repositories. For example the one at my work (or spip-zone), which is well over a gigabyte, binaries included. Since Git keeps the whole history around permanently, it ends up much bigger than an SVN checkout of the same thing, it seems to me.

  • Mapping Mars with Open Planetary Data | MapBox

    cc @fil

    http://mapbox.com/blog/2012-08-26-mapping-mars

    Inspired by the Mars Curiosity rover, I set out to map Mars using all open source tools (like QGIS and TileMill) and open data. The results of these tools and awesome data are stunning. The first shows Mars as “the red planet,” and the second map uses a more divergent color ramp to show Mars’ topographical variation.

    You can explore Mars for yourself on the Mars open data mapping website I created. I’ll be updating it as I create new maps using awesome planetary open data.

    My first step along the way was to install the USGS’s Integrated Software for Imagers and Spectrometers (ISIS), a software library that makes it pretty convenient to obtain planetary DEM data in the ISIS 3 cube format, which GDAL supports. Thanks to USGS’ great documentation, it was easy to get started with ISIS.

    My next step was to use GDAL to generate the hillshades, color relief, and slope shading. I took advantage of the great guides for working with terrain data that we have in our TileMill Docs section. To get a better idea about what my process looked like, check out the scripts here.
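
    In outline, that GDAL step looks like the sketch below (my own illustration using the DEMProcessing helper from the GDAL Python bindings in more recent GDAL releases, not the original MapBox scripts; the file names are placeholders):

    ```python
    from osgeo import gdal

    gdal.UseExceptions()

    # Hillshade from the Mars DEM
    gdal.DEMProcessing("mars_hillshade.tif", "mars_dem.tif", "hillshade",
                       zFactor=1.0, azimuth=315, altitude=45)

    # Color relief driven by an elevation-to-RGB ramp in a small text file
    gdal.DEMProcessing("mars_color_relief.tif", "mars_dem.tif", "color-relief",
                       colorFilename="mars_colors.txt")

    # Slope shading
    gdal.DEMProcessing("mars_slope.tif", "mars_dem.tif", "slope")
    ```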

    The last step was to take the finished GeoTIFFs into TileMill to style them using the great new compositing features that Mapnik has.

    #espace #cartographie #mars #planètes #open-data #visualisation

  • Mapping Mars with Open Planetary Data | MapBox
    http://mapbox.com/blog/2012-08-26-mapping-mars

    “Inspired by the Mars Curiosity rover, I set out to map Mars using all open source tools (like QGIS and TileMill) and open data. The results of these tools and awesome data are stunning. The first shows Mars as “the red planet,” and the second map uses a more divergent color ramp to show Mars’ topographical variation.”

    Still going full tilt at #mapbox: this time they’ve turned out maps of Mars.

    #map #mars #tiles #gdal #tilemill #topography

  • Open source tool to evaluate redistricting proposals and stop gerrymandering
    http://boingboing.net/2011/11/16/open-source-tool-to-evaluate-r.html

    The redistricting process is one of the most important — yet least understood — aspects of the US political system. It’s full of smoke-filled back room dealmaking by political insiders with little public input. The result? Districts are often drawn by the political parties themselves — usually the majority party — AKA gerrymandering. Because of this, district lines are altered by lawyers and politicians in ways that don’t accurately reflect the citizens. It’s a rigged process and the public has the power to get involved and keep government in check, but we need to first learn more about how it works.

    #découpage_électoral