technology:artificial intelligence

  • Cockroaches have become resistant to nearly all insecticides. Study published in Nature, relayed by RT

    Study warns that cockroaches are becoming impossible to kill
    http://www.el-nacional.com/noticias/ciencia-tecnologia/estudio-advierte-que-las-cucarachas-estan-volviendo-imposibles-matar_28

    Researchers tested different insecticides, rotated monthly, in buildings in Indiana and Illinois, and found that cockroach populations grew or held steady.

    A group of researchers exposed common cockroaches to different types of chemicals for six months and found that the populations grew or held steady.

    Cockroaches are rapidly evolving resistance to nearly every type of insecticide and could soon become almost impossible to kill with pesticides alone, according to a study published in the journal Nature and relayed by the RT website.

    In a bid to determine the optimal methods for eradicating these insects, entomologists at Purdue University in Indiana, United States, set up an experiment to assess resistance to pesticides across successive generations, focusing on the most common species: Blattella germanica, better known as the German cockroach.

  • When convenience meets #surveillance: AI at the corner store | The Seattle Times
    https://www.seattletimes.com/business/technology/when-convenience-meets-surveillance-ai-at-the-corner-store

    Before patrons can enter the basic convenience store at the corner of South 38th Street and Pacific Avenue, a camera under a red awning will take a picture and use artificial intelligence (AI) to decide whether the image matches any in a database of known robbers and shoplifters at that location.
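    The matching step described here — comparing a freshly captured face against a database of prior offenders — typically reduces to a nearest-neighbor search over face embeddings with a similarity threshold. A minimal sketch (the store's actual system is proprietary; the embeddings, names, and threshold below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_face(probe, database, threshold=0.8):
    """Return the database entry most similar to the probe
    embedding, but only if it clears the threshold; else None."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Invented 3-D example embeddings (real systems use 128-512 dims).
db = {"subject_a": [0.9, 0.1, 0.2], "subject_b": [0.1, 0.8, 0.5]}
```

    The threshold is where the policy questions live: set it low and innocent patrons are flagged; set it high and the system rarely fires.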

    #ia

  • Face-Reading AI Will Tell Police When Suspects Are Hiding Truth
    https://finance.yahoo.com/news/face-reading-ai-tell-police-145927474.html

    #Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.

    “If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
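    Facesoft has not published how its classifier works, but the "smile that doesn't reach the eyes" cue Ponniah describes corresponds to the classic Duchenne-smile test: a sincere smile contracts the muscles around the eyes as well as the mouth. A toy rule-based sketch (thresholds invented; a real system would learn these from labeled micro-expression data):

```python
def classify_smile(mouth_corner_lift, eye_crinkle):
    """Toy Duchenne-smile check. Inputs are normalized landmark
    displacements in [0, 1]: how far the mouth corners rise and
    how much the outer eye corners contract."""
    if mouth_corner_lift < 0.3:
        return "no smile"
    if eye_crinkle < 0.2:
        return "insincere smile"  # mouth smiles, eyes do not
    return "sincere smile"
```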

    #IA #business

  • Controversial deepfake app DeepNude shuts down hours after being exposed
    https://www.theverge.com/2019/6/27/18761496/deepnude-shuts-down-deepfake-nude-ai-app-women

    Less than a day after receiving widespread attention, the deepfake app that used AI to create fake nude photos of women is shutting down. In a tweet, the team behind DeepNude said they “greatly underestimated” interest in the project and that “the probability that people will misuse it is too high.” DeepNude will no longer be offered for sale and further versions won’t be released. The team also warned against sharing the software online, saying it would be against the app’s terms of service. (...)

    #algorithme #manipulation #discrimination #harcèlement

  • A Machine May Not Take Your Job, but One Could Become Your Boss
    The New York Times, June 23, 2019, Kevin Roose
    https://www.nytimes.com/2019/06/23/technology/artificial-intelligence-ai-workplace.html

    The goal of automation has always been efficiency. What if artificial intelligence sees humanity itself as the thing to be optimized?

    Cogito is one of several A.I. programs used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito’s chief executive, is to make workers more effective by giving them real-time feedback.

    Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don’t meet their targets, as The Verge uncovered this year. (Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.)
    [The Verge’s article : https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations]
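    Amazon has not disclosed its actual rules, but the kind of automated pipeline The Verge describes can be sketched as a quota check that escalates to paperwork after repeated misses (the quota and grace period here are invented):

```python
def productivity_monitor(hourly_rates, quota=300, grace=2):
    """Flag a worker for automatic termination paperwork after
    `grace` consecutive below-quota hours (units scanned/hour).
    All thresholds are invented; Amazon's real ones are not
    public, and it says managers can intervene in the process."""
    streak = 0
    for rate in hourly_rates:
        streak = streak + 1 if rate < quota else 0
        if streak >= grace:
            return "generate termination paperwork"
    return "ok"
```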

    There were no protests at MetLife’s call center. Instead, the employees I spoke with seemed to view their Cogito software as a mild annoyance at worst. Several said they liked getting pop-up notifications during their calls, although some said they had struggled to figure out how to get the “empathy” notification to stop appearing. (Cogito says the A.I. analyzes subtle differences in tone between the worker and the caller and encourages the worker to try to mirror the customer’s mood.)
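    Cogito's real features and thresholds are not public, but mood-mirroring feedback of the kind described can be sketched as a divergence check between the caller's and the agent's speech statistics (speaking pace here; the tolerance value is invented):

```python
def empathy_cue(agent_pace, caller_pace, tolerance=0.2):
    """Fire a pop-up when the agent's speaking pace (words/sec)
    diverges from the caller's by more than `tolerance`
    (relative). A toy illustration of real-time tone-mirroring
    feedback, not Cogito's actual algorithm."""
    divergence = abs(agent_pace - caller_pace) / max(caller_pace, 1e-9)
    return "empathy cue" if divergence > tolerance else None
```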

    MetLife, which uses the software with 1,500 of its call center employees, says using the app has increased its customer satisfaction by 13 percent.

    And the article's closing comment on tech:

    Using A.I. to correct for human biases is a good thing. But as more A.I. enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis. If that happens, it won’t be the robots staging an uprising.

    [emphasis is mine]

    There's no stopping progress. We're in 2019 and this deadly old adage still runs rampant (even in an article that purports to be critical...).

  • A new deepfake detection tool should keep world leaders safe—for now - MIT Technology Review
    https://www.technologyreview.com/s/613846/a-new-deepfake-detection-tool-should-keep-world-leaders-safefor-no

    An AI-produced video could show Donald Trump saying or doing something extremely outrageous and inflammatory. It would be only too believable, and in a worst-case scenario it might sway an election, trigger violence in the streets, or spark an international armed conflict.

    Fortunately, a new digital forensics technique promises to protect President Trump, other world leaders, and celebrities against such deepfakes—for the time being, at least. The new method uses machine learning to analyze a specific individual’s style of speech and movement, what the researchers call a “softbiometric signature.”

    The team then used machine learning to distinguish the head and face movements that characterize the real person. These subtle signals—the way Bernie Sanders nods while saying a particular word, perhaps, or the way Trump smirks after a comeback—are not currently modeled by deepfake algorithms.

    In experiments the technique was at least 92% accurate in spotting several variations of deepfakes, including face swaps and ones in which an impersonator is using a digital puppet. It was also able to deal with artifacts in the files that come from recompressing a video, which can confuse other detection techniques. The researchers plan to improve the technique by accounting for characteristics of a person’s speech as well. The research, which was presented at a computer vision conference in California this week, was funded by Google and DARPA, a research wing of the Pentagon. DARPA is funding a program to devise better detection techniques.
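    The paper's full model uses correlations among many facial action units and head-pose signals; a drastically reduced sketch of the same idea — learn how a person's movements co-vary, then flag clips that deviate — might look like this (the single feature pair and tolerance are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length movement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def signature(head_tilt, mouth_open):
    """A 'softbiometric signature' reduced to one number: how this
    person's head tilt co-varies with mouth movement in a video."""
    return pearson(head_tilt, mouth_open)

def looks_fake(reference_sig, clip_sig, tolerance=0.5):
    """Flag a clip whose movement correlation deviates from the
    person's reference signature by more than `tolerance`."""
    return abs(reference_sig - clip_sig) > tolerance
```

    The published method builds a high-dimensional version of this per person, which is why it must be retrained for each individual it protects.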

    The problem facing world leaders (and everyone else) is that it has become ridiculously simple to generate video forgeries with artificial intelligence. False news reports, bogus social-media accounts, and doctored videos have already undermined political news coverage and discourse. Politicians are especially concerned that fake media could be used to sow misinformation during the 2020 presidential election.

    Some tools for catching deepfake videos have been produced already, but forgers have quickly adapted. For example, for a while it was possible to spot a deepfake by tracking the speaker’s eye movements, which tended to be unnatural in deepfakes. Shortly after this method was identified, however, deepfake algorithms were tweaked to include better blinking.
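    The blink cue worked by tracking the eye aspect ratio (EAR) of Soukupová and Čech from six eye landmarks: it drops toward zero when the eye closes, so an unnaturally low blink rate flagged a fake. A minimal sketch (the 0.2 closed-eye threshold is a commonly used default, not taken from any specific detector):

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over six eye
    landmarks; near zero when the eye is closed."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_rate(ear_series, fps, closed_threshold=0.2):
    """Blinks per minute: count open-to-closed transitions in a
    per-frame EAR series."""
    blinks, was_closed = 0, False
    for ear in ear_series:
        closed = ear < closed_threshold
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes
```

    Humans blink roughly 15-20 times a minute at rest; early deepfakes blinked far less, until generators were retrained on footage with closed eyes.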

    “We are witnessing an arms race between digital manipulations and the ability to detect those, and the advancements of AI-based algorithms are catalyzing both sides,” says Hao Li, a professor at the University of Southern California who helped develop the new technique. For this reason, his team has not yet released the code behind the method.

    Li says it will be particularly difficult for deepfake-makers to adapt to the new technique, but he concedes that they probably will eventually. “The next step to go around this form of detection would be to synthesize motions and behaviors based on prior observations of this particular person,” he says.

    Li also says that as deepfakes get easier to use and more powerful, it may become necessary for everyone to consider protecting themselves. “Celebrities and political figures have been the main targets so far,” he says. “But I would not be surprised if in a year or two, artificial humans that look indistinguishable from real ones can be synthesized by any end user.”

    #fake_news #Deepfake #Video #Détection

  • Spy used AI-generated face to connect with targets
    https://mamot.fr/system/media_attachments/files/004/862/107/original/0f2b67672f8aa011.mp4

    Last month my attention was drawn to something suspicious: A fake LinkedIn profile connecting to Washington think tank types and senior US officials.

    Even weirder: Her face itself appeared to be fake.

    https://apnews.com/bc2f19097a4c4fffaa00de6770b8a60d

    via https://social.tcit.fr/@manhack/102274512502562853

  • Deepfakes have got Congress panicking. This is what it needs to do. - MIT Technology Review
    https://www.technologyreview.com/s/613676/deepfakes-ai-congress-politics-election-facebook-social

    In response, the House of Representatives will hold its first dedicated hearing tomorrow on deepfakes, the class of synthetic media generated by AI. In parallel, Representative Yvette Clarke will introduce a bill on the same subject. A new research report released by a nonprofit this week also highlights a strategy for coping when deepfakes and other doctored media proliferate.

    The deepfake bill
    The draft bill, a product of several months of discussion with computer scientists, disinformation experts, and human rights advocates, will include three provisions. The first would require companies and researchers who create tools that can be used to make deepfakes to automatically add watermarks to forged creations.

    The second would require social-media companies to build better manipulation detection directly into their platforms. Finally, the third provision would create sanctions, like fines or even jail time, to punish offenders for creating malicious deepfakes that harm individuals or threaten national security. In particular, it would attempt to introduce a new mechanism for legal recourse if people’s reputations are damaged by synthetic media.
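    The bill does not specify a watermarking scheme. As a toy illustration of the idea — embedding a machine-readable provenance tag in the output itself — here is a fragile least-significant-bit watermark; a real provenance system would need robust, cryptographically signed marks that survive recompression:

```python
def embed_watermark(pixels, tag_bits):
    """Write tag bits into the least significant bit of successive
    8-bit pixel values. Fragile on purpose: this is only to show
    the mechanism, not a deployable scheme."""
    out = list(pixels)
    for i, bit in enumerate(tag_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def read_watermark(pixels, n_bits):
    """Recover the first n_bits of the embedded tag."""
    return [p & 1 for p in pixels[:n_bits]]
```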

    “This issue doesn’t just affect politicians,” says Mutale Nkonde, a fellow at the Data & Society Research Institute and an advisor on the bill. “Deepfake videos are much more likely to be deployed against women, minorities, people from the LGBT community, poor people. And those people aren’t going to have the resources to fight back against reputational risks.”

    But the technology has advanced at a rapid pace, and the amount of data required to fake a video has dropped dramatically. Two weeks ago, Samsung demonstrated that it was possible to create an entire video out of a single photo; this week university and industry researchers demoed a new tool that allows users to edit someone’s words by typing what they want the subject to say.

    It’s thus only a matter of time before deepfakes proliferate, says Sam Gregory, the program director of Witness. “Many of the ways that people would consider using deepfakes—to attack journalists, to imply corruption by politicians, to manipulate evidence—are clearly evolutions of existing problems, so we should expect people to try on the latest ways to do those effectively,” he says.

    The report outlines a strategy for how to prepare for such an impending future. Many of the recommendations and much of the supporting evidence also aligns with the proposals that will appear in the House bill.

    The report found that current investments by researchers and tech companies into deepfake generation far outweigh those into deepfake detection. Adobe, for example, has produced many tools to make media alterations easier, including a recent feature for removing objects in videos; it has not, however, provided a foil to them.

    The result is a mismatch between the real-world nature of media manipulation and the tools available to fight it. “If you’re creating a tool for synthesis or forgery that is seamless to the human eye or the human ear, you should be creating tools that are specifically designed to detect that forgery,” says Gregory. The question is how to get toolmakers to redress that imbalance.

    #Deepfake #Fake_news #Synthetic_media #Médias_de_synthèse #Projet_loi

  • Creating an AI can be five times worse for the planet than a car, by Donna Lu, 6 June 2019, New Scientist
    https://www.newscientist.com/article/2205779-creating-an-ai-can-be-five-times-worse-for-the-planet-than-a-c

    Training artificial intelligence is an energy intensive process. New estimates suggest that the carbon footprint of training a single AI is as much as 284 tonnes of carbon dioxide equivalent – five times the lifetime emissions of an average car.

  • Training a single AI model can emit as much carbon as five cars in their lifetimes - MIT Technology Review
    https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in

    In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
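    The estimate follows a back-of-envelope shape: hardware power draw × training time × data-center overhead (PUE) × grid carbon intensity. The PUE of 1.58 and US-average intensity of 0.954 lbs CO2e/kWh below are figures commonly cited alongside this study, but treat them here as illustrative assumptions:

```python
def training_footprint_lbs(power_kw, hours, pue=1.58,
                           lbs_co2_per_kwh=0.954):
    """Back-of-envelope CO2e (lbs) for a training run:
    hardware power (kW) * wall-clock hours * data-center
    overhead (PUE) * grid carbon intensity."""
    return power_kw * hours * pue * lbs_co2_per_kwh
```

    At that rate a hypothetical 1 kW rig running for 100 hours emits roughly 150 lbs CO2e; the article's 626,000-lb figure reflects vastly longer multi-GPU runs plus architecture search on top.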

    It’s a jarring quantification of something AI researchers have suspected for a long time. “While probably many of us have thought of this in an abstract, vague level, the figures really show the magnitude of the problem,” says Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain, who was not involved in the research. “Neither I nor other researchers I’ve discussed them with thought the environmental impact was that substantial.”

    They found that the computational and environmental costs of training grew proportionally to model size and then exploded when additional tuning steps were used to increase the model’s final accuracy. In particular, they found that a tuning process known as neural architecture search, which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error, had extraordinarily high associated costs for little performance benefit. Without it, the most costly model, BERT, had a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent, close to a round-trip trans-American flight.

    What’s more, the researchers note that the figures should only be considered as baselines. “Training a single model is the minimum amount of work you can do,” says Emma Strubell, a PhD candidate at the University of Massachusetts, Amherst, and the lead author of the paper. In practice, it’s much more likely that AI researchers would develop a new model from scratch or adapt an existing model to a new data set, either of which can require many more rounds of training and tuning.

    The significance of those figures is colossal—especially when considering the current trends in AI research. “In general, much of the latest research in AI neglects efficiency, as very large neural networks have been found to be useful for a variety of tasks, and companies and institutions that have abundant access to computational resources can leverage this to obtain a competitive advantage,” Gómez-Rodríguez says. “This kind of analysis needed to be done to raise awareness about the resources being spent [...] and will spark a debate.”

    “What probably many of us did not comprehend is the scale of it until we saw these comparisons,” echoed Siva Reddy, a postdoc at Stanford University who was not involved in the research.

    The privatization of AI research

    The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research.

    #Intelligence_artificielle #Consommation_énergie #Empreinte_carbone

  • Can AI escape our control and destroy us?
    https://www.popsci.com/can-ai-destroy-humanity
    It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line. Fifty thousand years ago with the rise of Homo sapiens. Ten thousand years ago with the invention of civilization. Five hundred years ago with the invention of the printing press. Fifty years ago with the invention of the computer. In less than thirty years, it will end.

  • Chinese Surveillance Complex Advancing in Latin America

    In February, 2019, in a story that went almost unnoticed in Washington, the small South American nation of #Uruguay began installing the first of 2,100 surveillance cameras, donated by the People’s Republic of China to improve control of its borders with neighboring Argentina and Brazil.

    The move highlights the significant deepening of the Uruguay-PRC relationship over the last decade, including their establishment of a “Strategic Partnership” in October 2016, and the signing of a memorandum of understanding in August 2018 for Uruguay to join China’s Belt and Road initiative (despite being about as far from the PRC as is geographically possible).

    Beyond Uruguay, the development also highlights a little-discussed but important dimension of China’s advance: its expanding global sales of surveillance and control technologies. Although the press and U.S. political leadership have given significant attention to the risks of employing Chinese telecommunications companies such as Huawei, the equally serious but newer issue of expanding sales of Chinese surveillance systems has been less discussed.

    The installation of Chinese surveillance systems, acquired through PRC government donations or commercial contracts, is a growing phenomenon in Latin America and elsewhere.

    Such systems began to appear in the region more than a decade ago, including in 2007, when then mayor of Mexico City (now Mexican Foreign Minister) Marcelo Ebrard returned from a trip to the PRC with a deal to install thousands of Chinese cameras to combat crime in the Mexican capital. More recent examples include ECU-911 in Ecuador, a China-built national system of surveillance and communication initially agreed to by the administration of anti-U.S. populist president Rafael Correa. The system, which has expanded to include 4,300 cameras and a command center manned by thousands of Ecuadorans, has been built almost completely from Chinese equipment, designed for a range of otherwise noble purposes, from emergency response and combatting crime to monitoring volcanoes. Bolivia boasts a similar Chinese-built system, albeit more limited in scope, BOL-110, in addition to hundreds of surveillance cameras donated by the PRC to at least four of Bolivia’s principal cities.

    In Panama, which abandoned Taiwan to establish relations with the PRC in 2017, the government of Juan Carlos Varela has agreed to allow Huawei to install a system of cameras in the crime-ridden city of Colon and the associated free trade zone. Not by coincidence, in July 2019, Hikvision, China’s largest producer of surveillance cameras, announced plans to set up a major distribution center in Colon to support sales of its products throughout the Americas.

    In northern Argentina, near where the Chinese are developing a lithium mining operation and constructing the hemisphere’s largest array of photovoltaic cells for electricity generation, the Chinese company ZTE is installing another “911” style emergency response system with 1,200 cameras.

    In Venezuela, although not a surveillance system per se, the Chinese company ZTE has helped the regime of Nicolás Maduro implement a “fatherland identity card” linking different kinds of data on individuals, which allows the state to confer privileges (such as food rations) as a tool for social control.

    As with sectors such as computers and telecommunications, the PRC arguably wishes to support the global export of such systems by its companies to advance technologies it recognizes as strategic for the Chinese nation, per its own official policy documents such as Made In China 2025.

    The risks arising from spreading use of Chinese surveillance equipment and architectures are multiple and significant, involving: (1) the sensitivity of the data collected on specific persons and activities, particularly when processed through technologies such as facial recognition, integrated with other data, and analyzed through artificial intelligence (AI) and other sophisticated algorithms, (2) the potential ability to surreptitiously obtain access to that data, not only through the collection devices, but at any number of points as it is communicated, stored, and analyzed, and (3) the long-term potential for such systems to contribute to the sustainment of authoritarian regimes (such as those in Venezuela, Bolivia, Cuba, and formerly Ecuador) whose corrupt elites provide strategic access and commercial benefits to the Chinese state.

    The risk posed by such Chinese architectures is underestimated by simply focusing on the cameras and sensors themselves.

    Facial and other recognition technologies, and the ability to integrate data from different sensors and other sources such as smartphones enables those with access to the technology to follow the movement of individual human beings and events, with frightening implications. It includes the ability to potentially track key political and business elites, dissidents, or other persons of interest, flagging possible meetings between two or more, and the associated implications involving political or business meetings and the events that they may produce. Flows of goods or other activities around government buildings, factories, or other sites of interest may provide other types of information for political or commercial advantage, from winning bids to blackmailing compromised persons.

    While some may take assurance that the cameras and other components are safely guarded by benevolent governments or companies, the dispersed nature of the architectures, passing information, instructions, and analysis across great distances, means that the greatest risk is not physical access to the cameras, but the diversion of information throughout the process, particularly by those who built the components, databases and communication systems, and by those who wrote the algorithms (increasingly Chinese across the board).

    With respect to the political impact of such systems, while democratic governments may install them for noble purposes such as crimefighting and emergency response, and with limitations that respect individual privacy, authoritarian regimes who contract the Chinese for such technologies are not so limited, and have every incentive to use the technology to combat dissent and sustain themselves in power.

    The PRC, which continues to perfect this technology against its own population in places like Xinjiang (against the Uighur Muslims there), not only benefits commercially from selling the technology, but also benefits when allied dictatorships provide a testing ground for product development, and by using it to combat the opposition, keeping friends like Maduro in power, continuing to deliver the goods and access to Beijing.

    As with the debate over Huawei, whether or not Chinese companies are currently exploiting the surveillance and control systems they are deploying across Latin America to benefit the Chinese state, Chinese law (under which they operate) requires them to do so, if the PRC government so demands.

    The PRC record of systematic espionage, forced technology transfer, and other bad behavior should leave no one in Latin America comfortable that the PRC will not, at some point in the future, exploit such an enormous opportunity.

    https://www.newsmax.com/evanellis/china-surveillance-latin-america-cameras/2019/04/12/id/911484

    #Amérique_latine #Chine #surveillance #frontières #contrôles_frontaliers #Argentine #Brésil
    ping @reka

  • Siri and Alexa Reinforce Gender Bias, U.N. Finds - The New York Times
    https://www.nytimes.com/2019/05/22/world/siri-alexa-ai-gender-bias.html

    Why do most virtual assistants that are powered by artificial intelligence — like Apple’s Siri and Amazon’s Alexa system — by default have female names, female voices and often a submissive or even flirtatious style?

    The problem, according to a new report released this week by Unesco, stems from a lack of diversity within the industry that is reinforcing problematic gender stereotypes.

    “Obedient and obliging machines that pretend to be women are entering our homes, cars and offices,” Saniye Gulser Corat, Unesco’s director for gender equality, said in a statement. “The world needs to pay much closer attention to how, when and whether A.I. technologies are gendered and, crucially, who is gendering them.”

    One particularly worrying reflection of this is the “deflecting, lackluster or apologetic responses” that these assistants give to insults.

    The report borrows its title — “I’d Blush if I Could” — from a standard response from Siri, the Apple voice assistant, when a user hurled a gendered expletive at it. When a user tells Alexa, “You’re hot,” her typical response has been a cheery, “That’s nice of you to say!”

    Siri’s response was recently altered to a more flattened “I don’t know how to respond to that,” but the report suggests that the technology remains gender biased, arguing that the problem starts with engineering teams that are staffed overwhelmingly by men.

    “Siri’s ‘female’ obsequiousness — and the servility expressed by so many other digital assistants projected as young women — provides a powerful illustration of gender biases coded into technology products,” the report found.

    Amazon’s Alexa, named for the ancient library of Alexandria, is unmistakably female. Microsoft’s Cortana was named after an A.I. character in the Halo video game franchise that projects itself as a sensuous, unclothed woman. Apple’s Siri is a Norse name that means “beautiful woman who leads you to victory.” The Google Assistant system, also known as Google Home, has a gender-neutral name, but the default voice is female.

    Baked into their humanized personalities, though, are generations of problematic perceptions of women. These assistants are putting a stamp on society as they become common in homes across the world, and can influence interactions with real women, the report warns. As the report puts it, “The more that culture teaches people to equate women with assistants, the more real women will be seen as assistants — and penalized for not being assistant-like.”

    #Assistants_vocaux #Genre #Féminisme #IA #Intelligence_artificielle #Voix

  • Alexa, why does the brave new world of AI have all the sexism of the old one?
    https://www.theguardian.com/lifeandstyle/2019/may/22/alexa-why-does-the-brave-new-world-of-ai-have-all-the-sexism-of-the-old

    Virtual assistants such as Google Home and Siri only encourage the attitude that women exist merely to aid men in getting on with more important things. When women are over-represented in the workforce, it tends be in industries of assistance – cleaning, nursing, secretarial work and, now, the world of virtual assistants. Research by Unesco has shown that using default female voices in AI – as Microsoft has done with Cortana, Amazon with Alexa, Google with Google Assistant and Apple with (...)

    #Apple #Google #Microsoft #Amazon #robotique #Home #Assistant #Alexa #Cortana #domotique #Siri #biométrie #discrimination #voix (...)

    ##algorithme
    https://i.guim.co.uk/img/media/44671b648a16095e4077973b446bf932f5c64484/1061_0_2443_1467/master/2443.jpg

  • Swarms of Drones, Piloted by Artificial Intelligence, May Soon Patrol Europe’s Borders
    https://theintercept.com/2019/05/11/drones-artificial-intelligence-europe-roborder

    Imagine you’re hiking through the woods near a border. Suddenly, you hear a mechanical buzzing, like a gigantic bee. Two quadcopters have spotted you and swoop in for a closer look. Antennae on both drones and on a nearby autonomous ground vehicle pick up the radio frequencies coming from the cell phone in your pocket. They send the signals to a central server, which triangulates your exact location and feeds it back to the drones. The robots close in. Cameras and other sensors on the (...)
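    The localization step described — combining range estimates from antennas on several platforms to pinpoint a phone — can be sketched as classic 2D trilateration: subtracting pairs of range-circle equations leaves a linear system in (x, y). (Roborder's actual processing is not public; this is the textbook method.)

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three receiver positions and range
    estimates (e.g. derived from RF signal strength). Each pair of
    circle equations (x-xi)^2 + (y-yi)^2 = di^2 subtracts to a
    linear equation; two such equations pin down the point."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

    In practice RF range estimates are noisy, so real systems solve an over-determined least-squares version of the same system with more than three receivers.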

    #algorithme #robotique #militarisation #aérien #migration #surveillance #frontières #Roborder (...)

    ##drone

  • A photo storage app used customers’ private snaps to train facial recognition AI - The Verge
    https://www.theverge.com/2019/5/10/18564043/photo-storage-app-ever-facial-recognition-secretly-trained-ai


    You are the product…

    A photo storage app that offers users “free, unlimited private backup of all your life’s memories” has been secretly using customers’ private snaps to train and sell facial recognition software.

    As detailed in a report from NBC News, the startup Ever launched as a simple cloud storage business in 2013, but pivoted to become a facial recognition technology vendor in 2017 after realizing that a photo app “wasn’t going to be a venture-scale business.”

    Customers, though, were not informed of this change — or how their photographs and videos are now being used.

  • China working on data privacy law but enforcement is a stumbling block | South China Morning Post
    https://www.scmp.com/news/china/politics/article/3008844/china-working-data-privacy-law-enforcement-stumbling-block

    In China, scientists are worried about limitless data collection and possible abuses by the government and private actors. At the political level, there are attempts to introduce laws protecting data and privacy. According to the article, the real problems will arise when any new legislation in this area is implemented.

    Echo Xie, 5 May 2019 – Biometric data in particular needs to be protected from abuse by the state and businesses, analysts say
    The country is expected to have 626 million surveillance cameras fitted with facial recognition software by 2020

    In what is seen as a major step to protect citizens’ personal information, especially their biometric data, from abuse, China’s legislators are drafting a new law to safeguard data privacy, according to industry observers – but enforcement remains a major concern.

    “China’s private data protection law will be released and implemented soon, because of the fast development of technology, and the huge demand in society,” Zeng Liaoyuan, associate professor at the University of Electronic Science and Technology of China, said in an interview.

    Technology is rapidly changing life in China but relevant regulations had yet to catch up, Zeng said.

    Artificial intelligence and its many applications constitute a major component of China’s national plan. In 2017, the “Next Generation Artificial Intelligence Development Plan” called for the country to become the world leader in AI innovation by 2030.

    Biometric authentication is used in computer science for identification and access control. It includes fingerprinting, facial recognition, DNA, iris recognition, palm prints and other methods.
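    At its core, biometric access control of the kind described above reduces to comparing a freshly captured sample against a stored template and granting access only if they are similar enough. The sketch below is purely illustrative (the embeddings, the `verify` helper and the 0.8 threshold are all hypothetical, not taken from any system mentioned in the article); real deployments use learned feature extractors and carefully calibrated thresholds.

    ```python
    import math

    def cosine_similarity(a, b):
        # Similarity between two feature vectors, in [-1, 1]
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def verify(probe_embedding, enrolled_embedding, threshold=0.8):
        """Grant access only if the probe is close enough to the enrolled template."""
        return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

    enrolled = [0.12, 0.93, 0.31, 0.05]   # template stored at enrolment time
    probe_ok = [0.11, 0.90, 0.33, 0.07]   # same person, slightly different capture
    probe_bad = [0.85, 0.10, 0.02, 0.41]  # a different person

    print(verify(probe_ok, enrolled))   # True: embeddings nearly identical
    print(verify(probe_bad, enrolled))  # False: similarity well below threshold
    ```

    The privacy risk the article's experts point to lives in the `enrolled` template: unlike a password, a leaked biometric template cannot be changed.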

    In particular, the use of biometric data has grown exponentially in key areas: scanning users’ fingerprints or face to pay bills, to apply for social security qualification and even to repay loans. But the lack of an overarching law lets companies gain access to vast quantities of an individual’s personal data, a practice that has raised privacy concerns.

    During the “two sessions” last month, National People’s Congress spokesman Zhang Yesui said the authorities had hastened the drafting of a law to protect personal data, but did not say when it would be completed or enacted.

    One important focus, analysts say, is ensuring that the state does not abuse its power when collecting and using private data, considering the mass surveillance systems installed in China.

    “This is a big problem in China,” said Liu Deliang, a law professor at Beijing Normal University. “Because it’s about regulating the government’s abuse of power, so it’s not only a law issue but a constitutional issue.”

    The Chinese government is a major collector and user of private data. According to IHS Markit, a London-based market research firm, China had 176 million surveillance cameras in operation in 2016 and the number was set to reach 626 million by 2020.

    In any proposed law, the misuse of data should be clearly defined and even the government should bear legal responsibility for its misuse, Liu said.

    “We can have legislation to prevent the government from misusing private data but the hard thing is how to enforce it.”

    Especially crucial, legal experts say, is privacy protection for biometric data.

    “Compared with other private data, biometrics has its uniqueness. It could pose long-term risks with serious consequences,” said Wu Shenkuo, an associate law professor at Beijing Normal University.

    “Therefore, we need to pay more attention to the scope and limitations of collecting and using biometrics.”

    Yi Tong, a lawmaker from Beijing, filed a proposal concerning biometrics legislation at the National People’s Congress session last month.

    “Once private biometric data is leaked, it’s a lifetime leak and it will put the users’ private data security into greater uncertainty, which might lead to a series of risks,” the proposal said.

    Yi suggested clarifying the boundary between state power and private rights, and strengthening the management of companies.

    In terms of governance, Wu said China should specify the qualifications entities must have before they can collect, use and process private biometric data. He also said the law should identify which regulatory agencies would certify companies’ information.

    There was a need to restrict government behaviour when collecting private data, he said, and suggested some form of compensation for those whose data was misused.

    “Private data collection at the government level might involve the need for the public interest,” he said. “In this case, in addition to ensuring the legal procedure, the damage to personal interests should be compensated.”

    Still, data leaks and overcollection are common in China.

    A survey released by the China Consumers Association in August showed that more than 85 per cent of respondents had suffered some sort of data leak, such as their cellphone numbers being sold to spammers or their bank accounts being stolen.

    Another report by the association in November found that of the 100 apps it investigated, 91 had problems with overcollecting private data.

    One of them, MeituPic, an image editing software program, was criticised for collecting too much biometric data.

    The report also cited Ant Financial Services, the operator of the Alipay online payments service, for the way it collects private data, which it said was incompatible with the national standard. Ant Financial is an affiliate of Alibaba Group, which owns the South China Morning Post.

    In January last year, Ant Financial had to apologise publicly for automatically signing up users for a social credit programme without obtaining their consent.

    “When a company asks for a user’s private data, it’s unscrupulous, because we don’t have a law to limit their behaviour,” Zeng said.

    “Also it’s about business competition. Every company wants to hold its customers, and one way is to collect their information as much as possible.”

    Tencent and Alibaba, China’s two largest internet companies, did not respond to requests for comment about the pending legislation.

    #Chine #droit #vie_privée #surveillance #politique