Malicious Attacks to Neural Networks
▻https://hackernoon.com/malicious-attacks-to-neural-networks-8b966793dfe1
Adversarial Examples for Humans — An Introduction

This article is based on a twenty-minute talk I gave at the TrendMicro Philippines Decode Event 2018. It's about how malicious people can attack deep neural networks. A trained neural network is a model; I'll use the terms network (short for neural network) and model interchangeably throughout this article.

Deep learning in a nutshell

The basic building block of any neural network is an artificial neuron. Essentially, a neuron takes a set of inputs and outputs a single value: it computes the weighted sum of its inputs (plus a number called a bias) and feeds the result through a non-linear activation function. The function's output can then be used as one of the inputs to other neurons. You can connect neurons in various (usually (...)
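The neuron described above — a weighted sum plus a bias, passed through a non-linear activation — can be sketched in a few lines. This is a minimal illustration, not any particular library's implementation; the sigmoid activation and the example weights are assumptions chosen for demonstration.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    fed through a sigmoid activation (one common non-linear choice)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Hypothetical example: a neuron with three inputs
output = neuron([0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.5], bias=0.1)
print(output)  # a value between 0 and 1
```

The output of this function could itself be wired in as one of the inputs to other neurons, which is how layers of a network are composed.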
#artificial-intelligence #neural-networks #deep-learning #machine-learning