• Bertrand Louart, À écouter certains écolos, on a l’impression que les machines nous tombent du ciel !, 2020
    https://sniadecki.wordpress.com/2020/05/14/louart-itw-casaux

    Interview with @tranbert by Nicolas Casaux

    Nicolas Casaux: I spoke with Bertrand Louart, author of, among other works, Les êtres vivants ne sont pas des machines (éd. La Lenteur, 2018), host of the programme Racine de Moins Un on Radio Zinzine, editor of Notes & Morceaux choisis (éd. La Lenteur), a bulletin of criticism of science, technology and industrial society, contributor to Et vous n’avez encore rien vu…, a blog critical of scientism, and a member of the European cooperative Longo maï, where he works as a cabinetmaker.

    #critique_techno #anti-industriel #écologie #démocratie #acier #production #machine-outil #communalisme

  • Monitoring being pitched to fight Covid-19 was tested on refugees

    The pandemic has given a boost to controversial data-driven initiatives to track population movements

    In Italy, social media monitoring companies have been scouring Instagram to see who’s breaking the nationwide lockdown. In Israel, the government has made plans to “sift through geolocation data” collected by the Shin Bet intelligence agency and text people who have been in contact with an infected person. And in the UK, the government has asked mobile operators to share phone users’ aggregate location data to “help to predict broadly how the virus might move”.

    These efforts are just the most visible tip of a rapidly evolving industry combining the exploitation of data from the internet and mobile phones and the increasing number of sensors embedded on Earth and in space. Data scientists are intrigued by the new possibilities for behavioural prediction that such data offers. But they are also coming to terms with the complexity of actually using these data sets, and the ethical and practical problems that lurk within them.

    In the wake of the refugee crisis of 2015, tech companies and research consortiums pushed to develop projects using new data sources to predict movements of migrants into Europe. These ranged from broad efforts to extract intelligence from public social media profiles by hand, to more complex automated manipulation of big data sets through image recognition and machine learning. Two recent efforts have just been shut down, however, and others are yet to produce operational results.

    While IT companies and some areas of the humanitarian sector have applauded new possibilities, critics cite human rights concerns, or point to limitations in what such technological solutions can actually achieve.

    In September last year Frontex, the European border security agency, published a tender for “social media analysis services concerning irregular migration trends and forecasts”. The agency was offering the winning bidder up to €400,000 for “improved risk analysis regarding future irregular migratory movements” and support of Frontex’s anti-immigration operations.

    Frontex “wants to embrace” opportunities arising from the rapid growth of social media platforms, a contracting document outlined. The border agency believes that social media interactions drastically change the way people plan their routes, and thus examining would-be migrants’ online behaviour could help it get ahead of the curve, since these interactions typically occur “well before persons reach the external borders of the EU”.

    Frontex asked bidders to develop lists of key words that could be mined from platforms like Twitter, Facebook, Instagram and YouTube. The winning company would produce a monthly report containing “predictive intelligence ... of irregular flows”.

    Early this year, however, Frontex cancelled the opportunity. It followed swiftly on from another shutdown; Frontex’s sister agency, the European Asylum Support Office (EASO), had fallen foul of the European data protection watchdog, the EDPS, for searching social media content from would-be migrants.

    The EASO had been using the data to flag “shifts in asylum and migration routes, smuggling offers and the discourse among social media community users on key issues – flights, human trafficking and asylum systems/processes”. The search covered a broad range of languages, including Arabic, Pashto, Dari, Urdu, Tigrinya, Amharic, Edo, Pidgin English, Russian, Kurmanji Kurdish, Hausa and French.

    Although the EASO’s mission, as its name suggests, is centred around support for the asylum system, its reports were widely circulated, including to organisations that attempt to limit illegal immigration – Europol, Interpol, member states and Frontex itself.

    In shutting down the EASO’s social media monitoring project, the watchdog cited numerous concerns about process, the impact on fundamental rights and the lack of a legal basis for the work.

    “This processing operation concerns a vast number of social media users,” the EDPS pointed out. Because EASO’s reports are read by border security forces, there was a significant risk that data shared by asylum seekers to help others travel safely to Europe could instead be unfairly used against them without their knowledge.

    Social media monitoring “poses high risks to individuals’ rights and freedoms,” the regulator concluded in an assessment it delivered last November. “It involves the use of personal data in a way that goes beyond their initial purpose, their initial context of publication and in ways that individuals could not reasonably anticipate. This may have a chilling effect on people’s ability and willingness to express themselves and form relationships freely.”

    EASO told the Bureau that the ban had “negative consequences” on “the ability of EU member states to adapt the preparedness, and increase the effectiveness, of their asylum systems” and also noted a “potential harmful impact on the safety of migrants and asylum seekers”.

    Frontex said that its social media analysis tender was cancelled after new European border regulations came into force, but added that it was considering modifying the tender in response to these rules.

    The two shutdowns represented a stumbling block for efforts to track population movements via new technologies and sources of data. But the public health crisis precipitated by the Covid-19 virus has brought such efforts abruptly to wider attention. In doing so it has cast a spotlight on a complex knot of issues. What information is personal, and legally protected? How does that protection work? What do concepts like anonymisation, privacy and consent mean in an age of big data?
    The shape of things to come

    International humanitarian organisations have long been interested in whether they can use nontraditional data sources to help plan disaster responses. As they often operate in inaccessible regions with little available or accurate official data about population sizes and movements, they can benefit from using new big data sources to estimate how many people are moving where. In particular, as well as using social media, recent efforts have sought to combine insights from mobile phones – a vital possession for a refugee or disaster survivor – with images generated by “Earth observation” satellites.

    “Mobiles, satellites and social media are the holy trinity of movement prediction,” said Linnet Taylor, professor at the Tilburg Institute for Law, Technology and Society in the Netherlands, who has been studying the privacy implications of such new data sources. “It’s the shape of things to come.”

    As the devastating impact of the Syrian civil war worsened in 2015, Europe saw itself in crisis. Refugee movements dominated the headlines and while some countries, notably Germany, opened up to more arrivals than usual, others shut down. European agencies and tech companies started to team up with a new offering: a migration hotspot predictor.

    Controversially, they were importing a concept drawn from distant catastrophe zones into decision-making on what should happen within the borders of the EU.

    “Here’s the heart of the matter,” said Nathaniel Raymond, a lecturer at the Yale Jackson Institute for Global Affairs who focuses on the security implications of information communication technologies for vulnerable populations. “In ungoverned frontier cases [European data protection law] doesn’t apply. Use of these technologies might be ethically safer there, and in any case it’s the only thing that is available. When you enter governed space, data volume and ease of manipulation go up. Putting this technology to work in the EU is a total inversion.”

    Justin Ginnetti, head of data and analysis at the Internal Displacement Monitoring Centre in Switzerland, made a similar point. His organisation monitors movements to help humanitarian groups provide food, shelter and aid to those forced from their homes, but he casts a sceptical eye on governments using the same technology in the context of migration.

    “Many governments – within the EU and elsewhere – are very interested in these technologies, for reasons that are not the same as ours,” he told the Bureau. He called such technologies “a nuclear fly swatter,” adding: “The key question is: What problem are you really trying to solve with it? For many governments, it’s not preparing to ‘better respond to inflow of people’ – it’s raising red flags, to identify those en route and prevent them from arriving.”
    Eye in the sky

    A key player in marketing this concept was the European Space Agency (ESA) – an organisation based in Paris, with a major spaceport in French Guiana. The ESA’s pitch was to combine its space assets with other people’s data. “Could you be leveraging space technology and data for the benefit of life on Earth?” a recent presentation from the organisation on “disruptive smart technologies” asked. “We’ll work together to make your idea commercially viable.”

    By 2016, technologists at the ESA had spotted an opportunity. “Europe is being confronted with the most significant influxes of migrants and refugees in its history,” a presentation for their Advanced Research in Telecommunications Systems Programme stated. “One burning issue is the lack of timely information on migration trends, flows and rates. Big data applications have been recognised as a potentially powerful tool.” It decided to assess how it could harness such data.

    The ESA reached out to various European agencies, including EASO and Frontex, to offer a stake in what it called “big data applications to boost preparedness and response to migration”. The space agency would fund initial feasibility stages, but wanted any operational work to be jointly funded.

    One such feasibility study was carried out by GMV, a privately owned tech group covering banking, defence, health, telecommunications and satellites. GMV announced in a press release in August 2017 that the study would “assess the added value of big data solutions in the migration sector, namely the reduction of safety risks for migrants, the enhancement of border controls, as well as prevention and response to security issues related with unexpected migration movements”. It would do this by integrating “multiple space assets” with other sources including mobile phones and social media.

    When contacted by the Bureau, a spokeswoman from GMV said that, contrary to the press release, “nothing in the feasibility study related to the enhancement of border controls”.

    In the same year, the technology multinational CGI teamed up with the Dutch Statistics Office to explore similar questions. They started by looking at data around asylum flows from Syria and at how satellite images and social media could indicate changes in migration patterns in Niger, a key route into Europe. Following this experiment, they approached EASO in October 2017. CGI’s presentation of the work noted that at the time EASO was looking for a social media analysis tool that could monitor Facebook groups, predict arrivals of migrants at EU borders, and determine the number of “hotspots” and migrant shelters. CGI pitched a combined project, co-funded by the ESA, to start in 2019 and expand to serve more organisations in 2020.

    The idea was called Migration Radar 2.0. The ESA wrote that “analysing social media data allows for better understanding of the behaviour and sentiments of crowds at a particular geographic location and a specific moment in time, which can be indicators of possible migration movements in the immediate future”. Combined with continuous monitoring from space, the result would be an “early warning system” that offered potential future movements and routes, “as well as information about the composition of people in terms of origin, age, gender”.

    Internal notes released by EASO to the Bureau show the sheer range of companies trying to get a slice of the action. The agency had considered offers of services not only from the ESA, GMV, the Dutch Statistics Office and CGI, but also from BIP, a consulting firm, the aerospace group Thales Alenia, the geoinformation specialist EGEOS and Vodafone.

    Some of the pitches were better received than others. An EASO analyst who took notes on the various proposals remarked that “most oversell a bit”. They went on: “Some claimed they could trace GSM [ie mobile networks] but then clarified they could do it for Venezuelans only, and maybe one or two countries in Africa.” Financial implications were not always clearly provided. On the other hand, the official noted, the ESA and its consortium would pay 80% of costs and “we can get collaboration on something we plan to do anyway”.

    The features on offer included automatic alerts, a social media timeline, sentiment analysis, “animated bubbles with asylum applications from countries of origin over time”, the detection and monitoring of smuggling sites, hotspot maps, change detection and border monitoring.

    The document notes a group of services available from Vodafone, for example, in the context of a proposed project to monitor asylum centres in Italy. The proposal was to identify “hotspot activities”, using phone data to group individuals either by nationality or “according to where they spend the night”, and also to test if their movements into the country from abroad could be back-tracked. A tentative estimate for the cost of a pilot project, spread over four municipalities, came to €250,000 – of which an unspecified amount was for “regulatory (privacy) issues”.
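
    The kind of grouping described in this proposal can be sketched in a few lines (a toy illustration with invented identifiers and data, not Vodafone’s actual pipeline): assign each phone user to the cell tower where they most often appear during night hours, then count users per tower.

```python
from collections import Counter, defaultdict

def nightly_hotspots(records, night_hours=range(0, 6)):
    """Group users by the cell tower where they appear most often
    during night hours, then count users per tower.

    records: iterable of (user_id, hour_of_day, tower_id) tuples.
    """
    per_user = defaultdict(Counter)
    for user, hour, tower in records:
        if hour in night_hours:
            per_user[user][tower] += 1
    # Each user's modal night-time tower -- a proxy for "where they sleep"
    home_tower = {u: c.most_common(1)[0][0] for u, c in per_user.items()}
    # A "hotspot" is simply a tower with many users assigned to it
    return Counter(home_tower.values())

records = [
    ("a", 2, "T1"), ("a", 3, "T1"), ("a", 14, "T9"),
    ("b", 1, "T1"), ("b", 4, "T2"), ("b", 5, "T1"),
    ("c", 3, "T2"),
]
print(nightly_hotspots(records))  # Counter({'T1': 2, 'T2': 1})
```

    Even without names attached, such “home tower” assignments behave like identifiers: a handful of night-time locations is often enough to single out an individual, which is precisely the re-identification risk regulators worry about.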

    Stumbling blocks

    Elsewhere, efforts to harness social media data for similar purposes were proving problematic. A September 2017 UN study tried to establish whether analysing social media posts, specifically on Twitter, “could provide insights into ... altered routes, or the conversations PoC [“persons of concern”] are having with service providers, including smugglers”. The hypothesis was that this could “better inform the orientation of resource allocations, and advocacy efforts” – but the study was unable to conclude either way, after failing to identify enough relevant data on Twitter.

    The ESA pressed ahead, with four feasibility studies concluding in 2018 and 2019. The Migration Radar project produced a dashboard that showcased the use of satellite imagery for automatically detecting changes in temporary settlement, as well as tools to analyse sentiment on social media. The prototype received positive reviews, its backers wrote, encouraging them to keep developing the product.

    CGI was effusive about the predictive power of its technology, which could automatically detect “groups of people, traces of trucks at unexpected places, tent camps, waste heaps and boats” while offering insight into “the sentiments of migrants at certain moments” and “information that is shared about routes and motives for taking certain routes”. Armed with this data, the company argued that it could create a service which could predict the possible outcomes of migration movements before they happened.

    The ESA’s other “big data applications” study had identified a demand among EU agencies and other potential customers for predictive analyses to ensure “preparedness” and alert systems for migration events. A package of services was proposed, using data drawn from social media and satellites.

    Both projects were slated to evolve into a second, operational phase. But this seems to have never become reality. CGI told the Bureau that “since the completion of the [Migration Radar] project, we have not carried out any extra activities in this domain”.

    The ESA told the Bureau that its studies had “confirmed the usefulness” of combining space technology and big data for monitoring migration movements. The agency added that its corporate partners were working on follow-on projects despite “internal delays”.

    EASO itself told the Bureau that it “took a decision not to get involved” in the various proposals it had received.

    But even as these efforts slowed, others have been pursuing similar goals. The European Commission’s Knowledge Centre on Migration and Demography has proposed a “Big Data for Migration Alliance” to address data access, security and ethics concerns. A new partnership between the ESA and GMV – “Bigmig” – aims to support “migration management and prevention” through a combination of satellite observation and machine-learning techniques (the company emphasised to the Bureau that its focus was humanitarian). And a consortium of universities and private sector partners – GMV among them – has just launched a €3 million EU-funded project, named Hummingbird, to improve predictions of migration patterns, including through analysing phone call records, satellite imagery and social media.

    At a conference in Berlin in October 2019, dozens of specialists from academia, government and the humanitarian sector debated the use of these new technologies for “forecasting human mobility in contexts of crises”. Their conclusions raised numerous red flags. They found a “striking absence” of agreed upon core principles. It was hard to balance the potential good with ethical concerns, because the most useful data tended to be more specific, leading to greater risks of misuse and even, in the worst case scenario, weaponisation of the data. Partnerships with corporations introduced transparency complications. Communication of predictive findings to decision makers, and particularly the “miscommunication of the scope and limitations associated with such findings”, was identified as a particular problem.

    The full consequences of relying on artificial intelligence and “employing large scale, automated, and combined analysis of datasets of different sources” to predict movements in a crisis could not be foreseen, the workshop report concluded. “Humanitarian and political actors who base their decisions on such analytics must therefore carefully reflect on the potential risks.”

    A fresh crisis

    Until recently, discussion of such risks remained mostly confined to scientific papers and NGO workshops. The Covid-19 pandemic has brought it crashing into the mainstream.

    Some see critical advantages to using call data records to trace movements and map the spread of the virus. “Using our mobile technology, we have the potential to build models that help to predict broadly how the virus might move,” an O2 spokesperson said in March. But others believe that it is too late for this to be useful. The UK’s chief scientific officer, Patrick Vallance, told a press conference in March that using this type of data “would have been a good idea in January”.

    Like the 2015 refugee crisis, the global emergency offers an opportunity for industry to get ahead of the curve with innovative uses of big data. At a summit in Downing Street on 11 March, Dominic Cummings asked tech firms “what [they] could bring to the table” to help the fight against Covid-19.

    Human rights advocates worry about the longer term effects of such efforts, however. “Right now, we’re seeing states around the world roll out powerful new surveillance measures and strike up hasty partnerships with tech companies,” Anna Bacciarelli, a technology researcher at Amnesty International, told the Bureau. “While states must act to protect people in this pandemic, it is vital that we ensure that invasive surveillance measures do not become normalised and permanent, beyond their emergency status.”

    More creative methods of surveillance and prediction are not necessarily answering the right question, others warn.

    “The single largest determinant of Covid-19 mortality is healthcare system capacity,” said Sean McDonald, a senior fellow at the Centre for International Governance Innovation, who studied the use of phone data in the west African Ebola outbreak of 2014-5. “But governments are focusing on the pandemic as a problem of people management rather than a problem of building response capacity. More broadly, there is nowhere near enough proof that the science or math underlying the technologies being deployed meaningfully contribute to controlling the virus at all.”

    Legally, this type of data processing raises complicated questions. While European data protection law – the GDPR – generally prohibits processing of “special categories of personal data”, including ethnicity, beliefs, sexual orientation, biometrics and health, it allows such processing in a number of instances (among them public health emergencies). In the case of refugee movement prediction, there are signs that the law is cracking at the seams.

    Under GDPR, researchers are supposed to make “impact assessments” of how their data processing can affect fundamental rights. If they find potential for concern they should consult their national information commissioner. There is no simple way to know whether such assessments have been produced, however, or whether they were thoroughly carried out.

    Researchers engaged with crunching mobile phone data point to anonymisation and aggregation as effective tools for ensuring privacy is maintained. But the solution is not straightforward, either technically or legally.
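
    The trade-off can be made concrete with the simplest possible scheme (a generic sketch, not any operator’s actual method): publish only per-region counts of distinct users, and suppress any region whose count falls below a threshold k, so that no small, potentially identifiable group appears in the output.

```python
from collections import Counter

def aggregate_locations(records, k=3):
    """Count distinct users per region and suppress regions whose
    count falls below k -- a minimal k-anonymity-style safeguard.

    records: iterable of (user_id, region) pairs.
    """
    per_region = Counter(region for _, region in set(records))
    return {region: n for region, n in per_region.items() if n >= k}

records = [("u1", "North"), ("u2", "North"), ("u3", "North"),
           ("u4", "South"), ("u5", "South")]
print(aggregate_locations(records, k=3))  # {'North': 3}
```

    Suppressing small counts protects the published table, but it says nothing about the lawfulness of collecting and processing the individual-level records in the first place.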

    “If telcos are using individual call records or location data to provide intel on the whereabouts, movements or activities of migrants and refugees, they still need a legal basis to use that data for that purpose in the first place – even if the final intelligence report itself does not contain any personal data,” said Ben Hayes, director of AWO, a data rights law firm and consultancy. “The more likely it is that the people concerned may be identified or affected, the more serious this matter becomes.”

    More broadly, experts worry that, faced with the potential of big data technology to illuminate movements of groups of people, the law’s provisions on privacy begin to seem outdated.

    “We’re paying more attention now to privacy under its traditional definition,” Nathaniel Raymond said. “But privacy is not the same as group legibility.” Simply put, while issues around the sensitivity of personal data can be obvious, the combinations of seemingly unrelated data that offer insights about what small groups of people are doing can be hard to foresee, and hard to mitigate. Raymond argues that the concept of privacy as enshrined in the newly minted data protection law is anachronistic. As he puts it, “GDPR is already dead, stuffed and mounted. We’re increasing vulnerability under the colour of law.”

    https://www.thebureauinvestigates.com/stories/2020-04-28/monitoring-being-pitched-to-fight-covid-19-was-first-tested-o
    #cobaye #surveillance #réfugiés #covid-19 #coronavirus #test #smartphone #téléphones_portables #Frontex #frontières #contrôles_frontaliers #Shin_Bet #internet #big_data #droits_humains #réseaux_sociaux #intelligence_prédictive #European_Asylum_Support_Office (#EASO) #EDPS #protection_des_données #humanitaire #images_satellites #technologie #European_Space_Agency (#ESA) #GMV #CGI #Niger #Facebook #Migration_Radar_2.0 #early_warning_system #BIP #Thales_Alenia #EGEOS #complexe_militaro-industriel #Vodafone #GSM #Italie #twitter #détection #routes_migratoires #systèmes_d'alerte #satellites #Knowledge_Centre_on_Migration_and_Demography #Big_Data_for_Migration_Alliance #Bigmig #machine-learning #Hummingbird #weaponisation_of_the_data #IA #intelligence_artificielle #données_personnelles

    ping @etraces @isskein @karine4 @reka

    signalé ici par @sinehebdo :
    https://seenthis.net/messages/849167

  • The Judge Statistical Data Ban – My Story – Michaël Benesty – Artificial Lawyer
    https://www.artificiallawyer.com/2019/06/07/the-judge-statistical-data-ban-my-story-michael-benesty

    The basic issue was that some judges had a very high asylum rejection ratio (close to 100%, with hundreds of cases per year), while others from the same court had a very low ratio; and in France, cases are randomly distributed among the judges of a given court (there is no judge specialised in Moroccan asylum cases while another handles Chinese ones, for instance).

    Basically, we believed there was no reasonable explanation for such discrepancies, which were stable year after year.
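
    Under random assignment, every judge on a court draws cases from the same pool, so rejection rates should differ only by sampling noise. The check can be sketched as a two-proportion comparison against the court-wide rate (the numbers below are invented, not the actual French data):

```python
from math import sqrt

def rejection_z_scores(decisions):
    """decisions: dict mapping judge -> (rejections, total_cases).

    Returns each judge's z-score against the court-wide rejection
    rate; under random case assignment, |z| far beyond 2 is hard
    to explain by chance alone.
    """
    total_rej = sum(r for r, _ in decisions.values())
    total_n = sum(n for _, n in decisions.values())
    p = total_rej / total_n  # court-wide rejection rate
    scores = {}
    for judge, (r, n) in decisions.items():
        se = sqrt(p * (1 - p) / n)  # standard error of one judge's rate
        scores[judge] = (r / n - p) / se
    return scores

court = {"judge_A": (95, 100), "judge_B": (40, 100), "judge_C": (45, 100)}
z = rejection_z_scores(court)  # judge_A lands roughly +7 standard errors out
```

    A 95% rejection rate against a court-wide 60% sits about seven standard errors away, far outside what random allocation can produce, which is the kind of stable, unexplained discrepancy described above.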

    #droit #machine-learning #biais #asile #france #juges

    • #rapport_Cadiet #open_data #justice

      on this subject, and others (the rating of lawyers, …), a detailed and well-argued point of view, with firmly stated positions, …

      Notation des Avocats, algorithmes et Open Data des décisions de justice : les liaisons dangereuses. Par Fabien Drey, Avocat.
      https://www.village-justice.com/articles/notation-des-avocats-algorithmes-open-data-des-decisions-justice-le

      II. Algorithms everywhere, justice nowhere.
      […]
      A. True Open Data must become a reality.
      […]
      1. We are in favour of Open Data for court decisions.
      […]
      2. We are in favour of developing advanced search tools.
      […]
      3. We are for full transparency and oversight of algorithms.
      […]
      B. Open Data of court decisions must be regulated.
      […]
      1. For a ban on the statistical processing of court decisions.
      […]
      2. For a ban on the automation of court decisions.
      […]
      C. The “tracking” of lawyers must be prohibited.
      […]
      1. For a ban on the tracking of lawyers (yes, lawyers do want to remain anonymous.)
      […]
      2. Tracking and rating of lawyers: the end of the rule of law?
      […]
      III. And now, what do we do?

    • and this passage, more specifically on the processing of the data, …

      The first observation made by our colleagues is that justice is “rendered publicly”. We can only agree with this statement, although many exceptions are provided for.

      However, the public character of court decisions does not mean that all the data contained in those decisions must be made public.

      To our mind there is a wide gap between opening the doors of a courtroom (which allows one to attend the trial, hear the pleadings, appreciate the personalities of the people involved, etc.) and publishing the arid content of every court decision on the Internet…

      Indeed, this mass publication has a threefold perverse effect:
      • First, faced with the mass of information to process, no human being could read all the decisions published on a given subject; one therefore needs the help of an algorithm to sort and summarise the decision(s) sought. Yet, as we have seen, nothing guarantees that the algorithm is not biased or imperfect. In this context, publication is a mere façade, and our searches are guided more by the algorithm than by the genuine pursuit of a point of law;
      • Second, the mass of published information forces us into statistics. The volume of published decisions thus obliges us to trust the rates computed by the algorithm. Yet who is truly in a position to verify the soundness of these calculations?
      • Finally, the automation of our searches and the summaries it produces lead us to think and react like algorithms, simply as a function of the past and of what has already been done. In that context, what room will be left for creativity? Where will innovation find its place?

      We would recall in this regard that innovation rests on freeing oneself from all rules, on reasoning not from the past but through a new system, even reasoning by absurdity… in short, the very opposite of deep learning…

      By over-analysing the past to try to predict future decisions, we will lose our soul and what has always been the salt of our profession, namely audacity and ingenuity.

      The Cadiet report had, here again, identified this problem, making the transparency of algorithms a major issue.

      Yet it is clear that we are still far from it.

      Publishing all court decisions is not really the problem; it is the processing derived from these publications that can raise enormous difficulties, in particular concerning the identity of lawyers.

    • we will lose our soul and what has always been the salt of our profession, namely audacity and ingenuity

      That is quite a performance; it reminds me of the lament of the geniuses of cosmetic surgery.

      Yet who is truly in a position to verify the soundness of these calculations?

      You might think lawyers would be sensitive to the notion of a debate between argument and counter-argument, in the service of justice. Apparently not :)

  • How Machine Learning Can Revolutionize Subscription Billing
    https://hackernoon.com/how-machine-learning-can-revolutionize-subscription-billing-178a301238f3

    A subscription business gives the predictability of a stable cash flow, which helps a company grow and make plans for the future. Some organizations base their entire business on subscriptions, for example cable TV or SaaS providers, while others offer them as only one of their product licensing options. Since this is a successful business model, managers are trying to identify new ways to prevent customer churn, decrease the cost of customer acquisition and find the best ways to structure prices and plans. Until now, marketing research has been the primary tool to answer these questions, but machine learning (ML) is becoming more effective. How does machine learning work? Machine learning is all about making a system recognize patterns by using vast amounts of training data. Once the system (...)
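
    The “recognizing patterns in training data” idea can be illustrated with a deliberately tiny example (invented data and a hypothetical one-feature model, not any real billing system): from labelled past subscribers, learn a monthly-login threshold below which churn becomes likely, then score current subscribers against it.

```python
def fit_usage_threshold(history):
    """history: list of (monthly_logins, churned) pairs.

    Brute-force the login threshold that best separates churners
    from stayers -- a one-feature "decision stump" learner.
    """
    candidates = sorted({logins for logins, _ in history})
    return max(
        candidates,
        key=lambda t: sum((logins < t) == churned for logins, churned in history),
    )

history = [(1, True), (2, True), (3, True),       # low usage, churned
           (8, False), (9, False), (12, False)]   # high usage, stayed
threshold = fit_usage_threshold(history)
print(threshold)  # → 8: subscribers with fewer logins are churn risks
```

    A production churn model would use many features and a proper learner, but the principle is the same: parameters are fitted to historical examples, then applied to current subscribers.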

    #analytics #subscription-billing #machine-learning #startup #big-data

  • 28 Years Ago, Our Enslavement was Predicted — and We’re Still Not Listening
    https://hackernoon.com/28-years-ago-it-was-predicted-and-we-are-not-listening-652c0be46df7?sour

    “An untold #future lies ahead, and for the first time, I face it with a sense of hope. For if a machine, a Terminator, can learn the value of human life; maybe we can too.” — Sarah Connor. I was a kid when I first heard this quote, and it was shocking that we could even think of machines having feelings. Even more interesting was the quote’s suggestion that human beings don’t appreciate life. At the time I wasn’t a big fan of Terminator, and to be honest I’m still not the biggest follower; yet that bit at the end of the movie was nailed into my memory, especially these days, several years later. Some years after Terminator, the first Matrix movie was released, and from the very first viewing I was hooked (...)

    #smartphone-addiction #machine-takeover #giveashit #technology

  • A beginner’s guide to Deep Learning Applications in Medical Imaging
    https://hackernoon.com/a-beginners-guide-to-deep-learning-applications-in-medical-imaging-7aa3b

    Let us first understand what medical imaging is before we delve into how deep learning and other similar expert systems can help medical professionals such as radiologists in diagnosing their patients. This is how Wikipedia defines medical imaging: Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be (...)

    #keras #deep-learning #artificial-intelligence #medicine #machine-learning
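    The deep-learning models used on medical images are built from stacked convolutions. A minimal sketch of that single building block, with a hypothetical 4×4 "scan" (real models in Keras or similar learn many such filters and add nonlinearities and pooling):

```python
# Valid-mode 2D cross-correlation, the core operation of a convolutional
# layer. The image and kernel below are toy values, for illustration only.

def conv2d(image, kernel):
    """Slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter applied to a "scan" with a bright right half:
scan = [[0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 9, 9]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(scan, edge))  # strong response exactly at the boundary column
```

    In imaging networks, filters like this are not hand-designed but learned from labeled scans, which is what lets the model "reveal internal structures" relevant to a diagnosis.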

  • Big Data and Machine Learning with Nick Caldwell
    https://hackernoon.com/big-data-and-machine-learning-with-nick-caldwell-14ed702b1c64?source=rss

    Episode 39 of the Hacker Noon Podcast: an interview with Nick Caldwell, CPO at Looker and former VP of #engineering at Reddit. Listen to the interview on iTunes or Google Podcasts, or watch on YouTube. In this episode Trent Lapinski interviews Nick Caldwell from Looker; you get to learn about big data, machine learning and AI. “Modern data stores are extremely powerful. You can put tons and tons of data into them. You can query them without losing speed. And in some cases, you can even do analytics in the database. We’re just seeing this trend where the data layer is becoming more and more powerful, and Looker is riding that trend.” “My favorite learning, again, was just what’s going on in the data engineering space. The BigQuery to me, at that time, was just mind blowing. You dump 4–5 petabytes of (...)

    #artificial-intelligence #machine-learning #big-data

  • 12 Key Lessons from ML researchers and practitioners
    https://hackernoon.com/12-key-lessons-from-ml-researchers-and-practitioners-3d4818a2feff?source

    Machine learning algorithms come with the promise of being able to figure out how to perform important tasks by learning from data, i.e., generalizing from examples without being explicitly told what to do. This means that the greater the amount of data, the more ambitious the problems these algorithms can tackle. However, developing successful machine learning applications requires a certain amount of “black art” that is hard to find in textbooks or introductory courses on machine learning. I recently stumbled upon a great research paper by Professor Pedro Domingos that puts together lessons learned by machine learning researchers and practitioners. In this post, I am going to walk through those lessons with you. Get ready to learn about: pitfalls to avoid, important issues to focus on, and (...)

    #machine-learning #development #best-practices #artificial-intelligence #data-science
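    The excerpt's point that learning means "generalizing from examples" (and one of Domingos's best-known lessons) can be made concrete with a toy counterexample: a model that merely memorizes its training data looks perfect on that data yet tells you nothing about unseen inputs. All data below is hypothetical:

```python
# A "memorizer" scores 100% on training data but falls back blindly on
# anything unseen, so its training accuracy badly overstates its quality.

def fit_memorizer(examples):
    """'Train' by storing every (input, label) pair verbatim."""
    table = dict(examples)
    default = examples[0][1]
    def predict(x):
        return table.get(x, default)  # unseen inputs get an arbitrary guess
    return predict

train = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # seen pairs
held_out = [((2, 2), 0), ((2, 3), 1), ((3, 2), 1)]            # unseen pairs

model = fit_memorizer(train)
train_acc = sum(model(x) == y for x, y in train) / len(train)
test_acc = sum(model(x) == y for x, y in held_out) / len(held_out)
print(train_acc, test_acc)  # perfect on train, poor on held-out data
```

    This is why evaluation on held-out data, not training data, is the standard practice the lessons emphasize.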

  • Top Five Trends In Emerging Technology Right Now
    https://hackernoon.com/top-five-trends-in-emerging-technology-right-now-5b06de440e41?source=rss

    We have arguably seen a greater level of creative diversity and institutional support over the last half-decade with regard to emerging and disruptive technologies than in the five years before. Here are a few of the most important and disruptive trends in emerging technology as of publication. #5: The Virtual Alternatives To Reality. Alternative Reality and Virtual Reality are two fantastic ideas that were conceived by 20th-century science fiction writers, yet have only become commercially viable over the past decade. AR & VR are expected to generate $20.4 billion in worldwide revenue for 2019. Interactive entertainment has been, and is currently being, produced on a commercial scale for pioneering VR devices such as the HTC Vive and Oculus Rift (the latter company being acquired by (...)

    #emerging-technology #machine-learning #blockchain #virtual-reality #internet-of-things

  • How Will Machine Learning Impact Mobile Apps?
    https://hackernoon.com/how-will-machine-learning-impact-mobile-apps-644d72ef2ab5?source=rss----

    You might have heard the term “machine learning”. It is basically an application of Artificial Intelligence which enables computers and software to learn and anticipate outcomes automatically without human intervention. Machine learning has already proved useful in various fields, and today it is time to talk about how it serves mobile application development. It is true that whenever a new technology arrives, people find it difficult to handle. But when mobile applications adopt that technology, people become familiar with it easily, since they are so used to their phones. So it could be said that people gradually become familiar with new technologies, and every new technology ends up easy to handle and use. In this regard Corrado, a (...)

    #ai-in-mobile-app #artficial-intelligence #machine-learning-in-app #machine-learning #mobile-app-development

  • Our 25 Favorite Data Science Courses From Harvard To Udemy
    https://hackernoon.com/our-25-favorite-data-science-courses-from-harvard-to-udemy-9a89cac0358d?

    Originally posted here. Learning every facet of data science takes time. We have written pieces on different resources before, but we really wanted to focus on courses, or video courses on YouTube. There are so many options that it can be nice to have a list of classes worth taking. We are going to start with the free data science options so you can decide whether or not you want to invest more in courses. Tip: Coursera can make it seem like the only option is to purchase the course, but they do have an audit button at the very bottom. Now, if you appreciate Coursera, by all means purchase their specialization; I am still uncertain how I feel about it, but I do love taking Coursera courses. Select the audit course option to not pay for the course. Bootcamps and (...)

    #data-science #big-data #python #machine-learning #learning

  • Build an Abstractive Text Summarizer in 94 Lines of #tensorflow !! (Tutorial 6)
    https://hackernoon.com/build-an-abstractive-text-summarizer-in-94-lines-of-tensorflow-tutorial-

    This tutorial is the sixth in a series of tutorials that will help you build an abstractive text summarizer using TensorFlow; today we build an abstractive text summarizer in TensorFlow in an optimized way. We go through one of the most optimized models that has been built for this task. This model was written by dongjun-Lee; this is the link to his model. I have used his model on different datasets (in different languages) and it produced truly amazing results, so I would truly like to thank him for his effort. I have made multiple modifications to the model to enable it to run seamlessly on Google Colab (link to my model), and I have hosted the data onto (...)

    #machine-learning #nlp #ai #deep-learning
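    The tutorial's abstractive seq2seq model is far too large for a short sketch. For contrast, here is a minimal *extractive* baseline (a different and much simpler technique than the one the tutorial builds): score each sentence by the average corpus frequency of its words and keep the top one. The example document is hypothetical:

```python
# Frequency-based extractive summarization: sentences whose words occur
# often across the document are assumed to carry its main topic.

from collections import Counter

def summarize(text, n=1):
    """Return the n highest-scoring sentences from a period-delimited text."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())
    def score(s):
        words = s.split()
        return sum(freq[w.lower()] for w in words) / len(words)
    return ". ".join(sorted(sentences, key=score, reverse=True)[:n])

doc = ("The model reads the article. The model writes a short summary. "
       "Training the model takes a long time")
print(summarize(doc))
```

    An abstractive model like the tutorial's instead *generates* new sentences word by word, which is why it needs an encoder-decoder network and training data rather than a scoring rule.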

  • Using managed machine learning services (MLaaS) as your baseline
    https://hackernoon.com/using-managed-machine-learning-services-mlaas-as-your-baseline-e6c239d3f

    Build versus buy: does MLaaS fit your data science project’s needs, and how do you evaluate across vendors? Making a build-or-buy decision at the start of any data science project can seem daunting — let’s review a (…) Almost every major cloud provider now offers a custom machine learning service — from Google Cloud’s AutoML Vision Beta, to Microsoft Azure’s Custom Vision Preview, and IBM Watson’s Visual Recognition service; the field of computer vision is no exception. Perhaps your team has been in this build-or-buy predicament? From the marketing perspective, these managed ML services are positioned for companies that are just building up their data science teams or whose teams are primarily composed of data analysts, BI specialists, or software engineers (who might be transitioning to data (...)

    #enterprise-software #alma #google-cloud-platform #machine-learning #computer-vision

  • Computer Vision: The #future of the Future in More Ways Than One
    https://hackernoon.com/computer-vision-the-future-of-the-future-in-more-ways-than-one-3079e741a

    As computer vision expands its influence in the human world, there are many things to consider with regard to how it will change the way we view our lives and how we actually live them. We look now at just a few of the advances computer vision has given us. (Image source: dribbble.com) Sky’s The Limit. All around us — and most of the time without us even realizing it — computer vision (CV) is being used to enhance our lives. With the iPhone and its Face ID #technology to unlock your smartphone as a case in point, not to mention the countless other services and apps that have popped up on the market of late, we’re headed in the right direction as far as innovation is concerned. Technology is progressing at an unbelievable pace. Things that were only a dream in 2010 are now the de facto reality. The algorithms of (...)

    #machine-learning #artificial-intelligence #computer-vision

  • How to prevent embarrassment in AI
    https://hackernoon.com/how-to-prevent-embarrassment-in-ai-5e64f437b9bb?source=rss----3a8144eabf

    The must-have safety net that’ll save your bacon. How will you prevent embarrassment in machine learning? The answer is… partially. Expect the unexpected! Wise product managers and designers might save your skin by seeing some issues coming a mile off and helping you cook a preventative fix into your production code. Unfortunately, AI systems are complex and your team usually won’t think of everything. There will be nasty surprises that force you into reactive mode. Real life is like that too. I’m meticulous when planning my vacations, but I didn’t consider the possibility that I’d miss my train to Rome thanks to a hospital tour sponsored by shellfish poisoning. True story. It taught college-age me never to repeat the words “I’ve thought of everything.” Speaking of things nobody expects… When the (...)

    #data-science #artificial-intelligence #hackernoon-top-story #machine-learning #technology

  • 10 Open Source #ai Project Ideas For Startups
    https://hackernoon.com/10-open-source-ai-project-ideas-for-startups-1afda6fb0aa8?source=rss----

    The open source AI projects particularly pay attention to deep learning, machine learning, neural networks and other applications that are extending the use of AI. Those involved in deep research have always had the goal of building machines capable of thinking like human beings. For the last few years, computer scientists have made unbelievable progress in Artificial Intelligence (AI), to the extent that interest in AI project ideas keeps increasing among technology enthusiasts. As per Gartner’s prediction, Artificial Intelligence technologies are going to be virtually prevalent in nearly all new software products and services. The contribution of open source software development to the rise of Artificial Intelligence is immeasurable. And innumerable top machine learning, deep learning, (...)

    #startup #business #open-source #machine-learning

  • 10 Top Open Source AI Technologies For Startups
    https://hackernoon.com/10-top-open-source-ai-technologies-for-startups-7c5f10b82fb1?source=rss-

    In the area of technology research, Artificial Intelligence is one of the hottest trends. In fact, many startups have already made progress in areas like natural language, neural networks, AI, machine learning and image processing. Many other big companies like Google, Microsoft, IBM, Amazon and Facebook are heavily investing in their own R&D. Hence, it is no surprise that AI applications are increasingly useful for small as well as large businesses in 2019. In this blog, I have listed the top 10 open source AI technologies for small businesses and startups. 1) Apache SystemML: the machine learning technology created at IBM that has reached top-level project status in the Apache Software Foundation and is a flexible and scalable machine learning system. The important (...)

    #machine-learning #artificial-intelligence #open-source #startup #open-source-ai

  • Top 5 Machine Learning Projects for Beginners
    https://hackernoon.com/top-5-machine-learning-projects-for-beginners-47b184e7837f?source=rss---

    Purchased image designed by PlargueDoctor. As a beginner, jumping into a new machine learning project can be overwhelming. The whole process starts with picking a data set, and then studying that data set in order to find out which class or type of machine learning algorithm will fit it best. Here are some tips from experts on how to get started: Find a modestly sized data set which is relatively easy to analyze; good places to search are the UCI ML Repository and Kaggle. Experiment with the data set: to get a good “feel” for the data, you can run several top machine learning algorithms on it to see how it behaves and what performance each algorithm achieves. Pick the algorithm with the best performance and tune it accordingly. Ok, now we are packed with a couple (...)

    #deep-learning #machine-learning #artificial-intelligence #ai #programming
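    The workflow the tips describe (run several algorithms on the same data, compare performance, keep the best) can be sketched end to end. Data, split and algorithms below are all hypothetical toys; in practice you would use a real dataset and library implementations:

```python
# Compare two trivial classifiers on the same held-out data and keep
# whichever achieves the better accuracy, as the beginner tips suggest.

def majority(train):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

def one_nn(train):
    """1-nearest-neighbour on 1-D inputs: copy the closest example's label."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train = [(1, "a"), (2, "a"), (3, "a"), (10, "b"), (11, "b")]
held_out = [(1.5, "a"), (10.5, "b"), (2.5, "a"), (12, "b")]

scores = {name: accuracy(algo(train), held_out)
          for name, algo in [("majority", majority), ("1-nn", one_nn)]}
best = max(scores, key=scores.get)
print(scores, best)
```

    The majority baseline matters: if a fancier algorithm cannot beat it, the "performance" you measured is an illusion.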

  • How Can AI Change #gaming Experience
    https://hackernoon.com/how-can-ai-change-gaming-experience-ed0741b9f51e?source=rss----3a8144eab

    We will see characters that can learn and adapt to the player. Photo by Mali Maeder. Whether you know it or not, you use artificial intelligence all the time. Maybe you own a smart speaker, you’ve seen a self-driving car, or you’ve used Google Photos to search for images of your cat. Now, there’s also a good chance you’ve played a video game that happens to have some AI in it, like God of War or Red Dead Redemption 2. What may surprise you is that those two types of AI are not the same thing. The AI in digital systems and autonomous vehicles is self-learning and really fast, but it’s also really unpredictable. Yet these two worlds are fast colliding, and once game developers have the right tools and the freedom to make games that really push the limits of AI, the results are going to be the stuff (...)

    #machine-learning #technology #artificial-intelligence #future

  • A Beginners Guide to Federated Learning
    https://hackernoon.com/a-beginners-guide-to-federated-learning-b29e29ba65cf?source=rss----3a814

    We predict growth for Federated Learning, a new framework for Artificial Intelligence (AI) model development that is distributed over millions of mobile devices. Federated Learning models are hyper-personalized for a user, involve minimal latency, have low infrastructure overheads and are privacy-preserving by design. This article is a beginner-level primer on Federated Learning. Disclaimer: the author is an investor and advisor in the Federated Learning startup S20.ai. In case you are wondering, S20 stands for “Software 2.0”. The AI market is dominated by tech giants such as #google, Amazon and Microsoft, offering cloud-based AI solutions and APIs. In traditional AI methods, sensitive user data is sent to the servers where models are trained. Recently we are seeing the beginning of a decentralized (...)

    #federated-learning #artificial-intelligence #machine-learning #cloud-computing
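    The core mechanism behind the privacy claim above is federated averaging: each device fits an update on its own data, and only those updates, weighted by local dataset size, are combined on the server. A minimal sketch with toy numbers and a deliberately trivial "model" (a single mean parameter; real systems average neural-network weights):

```python
# Federated averaging in miniature: raw data never leaves a device;
# the server only ever sees (local_parameter, local_count) pairs.

def local_update(data):
    """On-device step: fit the local 'model' (here just the mean) on private data."""
    return sum(data) / len(data), len(data)

def federated_average(updates):
    """Server step: combine updates weighted by each device's dataset size."""
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

devices = [[1.0, 2.0, 3.0], [10.0], [4.0, 4.0]]  # private, stays on-device
updates = [local_update(d) for d in devices]
global_model = federated_average(updates)
print(global_model)  # matches the mean over all points, computed without pooling them
```

    Weighting by count is what makes the combined model equal to the one a central server would have learned from the pooled data, which is the appeal of the approach.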

  • 10 Great Articles On Data Science And Data Engineering
    https://hackernoon.com/10-great-articles-on-data-science-and-data-engineering-d5abdf4a4a44?sour

    Data science and #programming are such rapidly expanding specialties that it is hard to keep up with all the articles that come out from Google, Uber, Netflix and one-off engineers. We have been reading several over the past few weeks and wanted to share some of our top blog posts for this week of April 2019! We hope you enjoy these articles. Building and Scaling Data Lineage at Netflix, by Di Lin, Girish Lingappa and Jitender Aswani: Imagine yourself in the role of a data-inspired decision maker staring at a metric on a dashboard, about to make a critical business decision but pausing to ask a question — “Can I run a check myself to understand what data is behind this metric?” Now, imagine yourself in the role of a software engineer responsible for a micro-service which publishes data consumed by a few critical (...)

    #python #big-data #machine-learning #data-science

  • ML.NET: Machine Learning framework by Microsoft for .NET developers
    https://hackernoon.com/ml-net-machine-learning-framework-by-microsoft-for-net-developers-3c6f46

    Whenever you think of data science and machine learning, the only two programming languages that come to mind are Python and R. But the question arises: what if the developer knows languages other than these? We have a solution in the form of Microsoft’s own machine learning framework, introduced at Build 2018 especially for .NET and C# developers. The framework is open source and cross-platform, and can run on Windows, Linux and macOS. Developers have always wanted a NuGet package they can plug into a .NET application to create machine learning applications. After the release of the first version, ML.NET is still a baby, but it is already showing the (...)

    #dotnet #dotnet-developer #mldotnet #microsoft-framework #machine-learning

  • Malicious Attacks to Neural Networks
    https://hackernoon.com/malicious-attacks-to-neural-networks-8b966793dfe1?source=rss----3a8144ea

    Adversarial Examples for Humans — An Introduction. This article is based on a twenty-minute talk I gave for the TrendMicro Philippines Decode Event 2018. It’s about how malicious people can attack deep neural networks. A trained neural network is a model; I’ll be using the terms network (short for neural network) and model interchangeably throughout this article. Deep learning in a nutshell: the basic building block of any neural network is an artificial neuron. Essentially, a neuron takes a bunch of inputs and outputs a value. A neuron gets the weighted sum of the inputs (plus a number called a bias) and feeds it to a non-linear activation function. Then, the function outputs a value that can be used as one of the inputs to other neurons. You can connect neurons in various different (usually (...)

    #artificial-intelligence #neural-networks #deep-learning #machine-learning
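    The artificial neuron the excerpt describes (weighted sum of inputs, plus a bias, through a non-linear activation) is a few lines of code. Weights and inputs below are hypothetical; here the activation is a sigmoid:

```python
# One artificial neuron: z = w·x + b, output = sigmoid(z).

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

out = neuron([1.0, 2.0], [0.5, -0.25], bias=0.0)
print(out)  # z = 0.5 - 0.5 + 0 = 0, and sigmoid(0) = 0.5
```

    Adversarial attacks exploit exactly this arithmetic: tiny, carefully chosen changes to the inputs shift the weighted sums just enough, layer after layer, to flip the network's final answer.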

  • Beam Search & Attention for text summarization made easy (Tutorial 5)
    https://hackernoon.com/beam-search-attention-for-text-summarization-made-easy-tutorial-5-3b7186

    This tutorial is the fifth in a series of tutorials that will help you build an abstractive text summarizer using TensorFlow. Today we discuss some useful modifications to the core RNN seq2seq model we covered in the last tutorial. These modifications are: Beam Search and the Attention Model. About the series: this is a series of tutorials that will help you build an abstractive text summarizer using TensorFlow through multiple approaches. You don’t need to download the data, nor do you need to run the code locally on your device, as the data is on Google Drive (you can simply copy it to your own Google Drive; learn more here), and the code for this series is written in Jupyter notebooks to run on Google Colab and can be found here. We have covered so far (code for this series can be found here): 0. (...)

    #nlp #ai #technology #machine-learning #deep-learning
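    Of the two modifications, beam search is simple enough to sketch in full: at each decoding step, keep only the k highest-scoring partial sequences instead of exploring all of them. The next-token probabilities below come from a hypothetical stand-in, not the tutorial's trained seq2seq model:

```python
# Beam search over a toy "language model": greedy decoding would take the
# locally best token at each step; beam search keeps k candidates alive.

import math

def step_probs(seq):
    """Hypothetical next-token distribution given the prefix (not a real model)."""
    if seq[-1] == "a":
        return {"a": 0.1, "b": 0.2, "</s>": 0.7}
    return {"a": 0.6, "b": 0.3, "</s>": 0.1}

def beam_search(k=2, max_len=4):
    beams = [(["<s>"], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, lp in beams:
            if seq[-1] == "</s>":           # finished sequences carry over as-is
                candidates.append((seq, lp))
                continue
            for tok, p in step_probs(seq).items():
                candidates.append((seq + [tok], lp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]

print(beam_search())
```

    Log-probabilities are summed rather than multiplying raw probabilities to avoid numerical underflow on long sequences; attention, the other modification, changes the model itself rather than the decoding procedure.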