author (string) | claps (string) | reading_time (int64) | link (string) | title (string) | text (string)
---|---|---|---|---|---|
Milo Spencer-Harper | 2.2K | 3 | https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a?source=tag_archive---------1---------------- | How to build a multi-layered neural network in Python | In my last blog post, thanks to an excellent blog post by Andrew Trask, I learned how to build a neural network for the first time. It was super simple. 9 lines of Python code modelling the behaviour of a single neuron.
But what if we are faced with a more difficult problem? Can you guess what the ‘?’ should be?
The trick is to notice that the third column is irrelevant, while the first two columns exhibit the behaviour of an XOR gate: if exactly one of the first two columns is 1, then the output is 1. However, if both columns are 0 or both columns are 1, then the output is 0.
So the correct answer is 0.
However, this would be too much for our single neuron to handle. This is considered a “nonlinear pattern”, because no single weighted combination of the inputs maps directly to the output; in other words, the problem is not linearly separable.
Instead, we must create an additional hidden layer, consisting of four neurons (Layer 1). This layer enables the neural network to think about combinations of inputs.
You can see from the diagram that the output of Layer 1 feeds into Layer 2. It is now possible for the neural network to discover correlations between the output of Layer 1 and the output in the training set. As the neural network learns, it will amplify those correlations by adjusting the weights in both layers.
In fact, image recognition is very similar. There is no direct relationship between pixels and apples. But there is a direct relationship between combinations of pixels and apples.
The process of adding more layers to a neural network, so it can think about combinations, is called “deep learning”. Ok, are we ready for the Python code? First I’ll give you the code and then I’ll explain further.
Also available here: https://github.com/miloharper/multi-layer-neural-network
This code is an adaptation from my previous neural network. So for a more comprehensive explanation, it’s worth looking back at my earlier blog post.
What’s different this time is that there are multiple layers. When the neural network calculates the error in layer 2, it propagates the error backwards to layer 1, adjusting the weights as it goes. This is called “back propagation”.
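The author’s full script is at the GitHub link above; as a rough illustration of the same idea (a minimal NumPy sketch, not the author’s exact code), a two-layer network trained with back propagation looks something like this:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Training set following the pattern described above: the third column is
# irrelevant, and the output is 1 when exactly one of the first two columns is 1.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0],
              [1, 0, 0], [1, 1, 1], [0, 0, 0]])
y = np.array([[0, 1, 1, 1, 1, 0, 0]]).T

np.random.seed(1)
w1 = 2 * np.random.random((3, 4)) - 1   # Layer 1: four hidden neurons, three inputs each
w2 = 2 * np.random.random((4, 1)) - 1   # Layer 2: one output neuron, four inputs

for _ in range(60000):
    # Forward pass through both layers
    l1 = sigmoid(X.dot(w1))
    l2 = sigmoid(l1.dot(w2))
    # Error in layer 2, propagated backwards to layer 1
    l2_delta = (y - l2) * l2 * (1 - l2)
    l1_delta = l2_delta.dot(w2.T) * l1 * (1 - l1)
    # Adjust the weights in both layers
    w2 += l1.T.dot(l2_delta)
    w1 += X.T.dot(l1_delta)

# New situation [1, 1, 0]: the prediction should be close to 0
print(sigmoid(sigmoid(np.array([1, 1, 0]).dot(w1)).dot(w2)))
```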
Ok, let’s try running it using the Terminal command:
python main.py
You should get a result that looks like this:
First the neural network assigned herself random weights to her synaptic connections, then she trained herself using the training set. Then she considered a new situation [1, 1, 0] that she hadn’t seen before and predicted 0.0078876. The correct answer is 0. So she was pretty close!
You might have noticed that as my neural network has become smarter I’ve inadvertently personified her by using “she” instead of “it”.
That’s pretty cool. But the computer is doing lots of matrix multiplication behind the scenes, which is hard to visualise. In my next blog post, I’ll visually represent our neural network with an animated diagram of her neurons and synaptic connections, so we can see her thinking.
Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI.
|
Josh | 462 | 9 | https://medium.com/technology-invention-and-more/everything-you-need-to-know-about-artificial-neural-networks-57fac18245a1?source=tag_archive---------3---------------- | Everything You Need to Know About Artificial Neural Networks | The year 2015 was a monumental year in the field of artificial intelligence. Not only are computers learning more and learning faster, but we’re learning more about how to improve their systems. Everything is starting to align, and because of it we’re seeing strides we’ve never thought possible until now. We have programs that can tell stories about pictures. We have cars that are driving themselves. We even have programs that create art. If you want to read more about advancements in 2015, read this article. Here at Josh.ai, with AI technology becoming the core of just about everything we do, we think it’s important to understand some of the common terminology and to get a rough idea of how it all works.
A lot of the advances in artificial intelligence are new statistical models, but the overwhelming majority of the advances are in a technology called artificial neural networks (ANN). If you’ve read anything about them before, you’ll have read that these ANNs are a very rough model of how the human brain is structured. Take note that there is a difference between artificial neural networks and neural networks. Though most people drop the artificial for the sake of brevity, the word artificial was prepended to the phrase so that people in computational neurobiology could still use the term neural network to refer to their work. Below is a diagram of actual neurons and synapses in the brain compared to artificial ones.
Fear not if the diagram doesn’t come through very clearly. What’s important to understand here is that in our ANNs we have these units of calculation called neurons. These artificial neurons are connected by synapses which are really just weighted values. What this means is that given a number, a neuron will perform some sort of calculation (for example the sigmoid function), and then the result of this calculation will be multiplied by a weight as it “travels.” The weighted result can sometimes be the output of your neural network, or as I’ll talk about soon, you can have more neurons configured in layers, which is the basic concept behind what we call deep learning.
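To make the neuron-and-synapse picture concrete, here is a minimal sketch of a single artificial neuron (the numbers and names are mine, purely for illustration):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def neuron_output(inputs, weights):
    """One artificial neuron: weight each incoming signal, sum them, then squash."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(weighted_sum)

# The result then "travels" along weighted synapses to neurons in the next layer.
print(neuron_output([0.5, 0.9, -0.3], [0.8, -0.2, 0.4]))
```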
Artificial neural networks are not a new concept. In fact, we didn’t even always call them neural networks and they certainly don’t look the same now as they did at their inception. Back during the 1960s we had what was called a perceptron. Perceptrons were made of McCulloch-Pitts neurons. We even had biased perceptrons, and ultimately people started creating multilayer perceptrons, which is synonymous with the general artificial neural network we hear about now.
But wait, if we’ve had neural networks since the 1960s, why are they just now getting huge? It’s a long story, and I encourage you to listen to this podcast episode to listen to the “fathers” of modern ANNs talk about their perspective of the topic. To quickly summarize, there’s a handful of factors that kept ANNs from becoming more popular. We didn’t have the computer processing power and we didn’t have the data to train them. Using them was frowned upon due to them having a seemingly arbitrary ability to perform well. Each one of these factors is changing. Our computers are getting faster and more powerful, and with the internet, we have all kinds of data being shared for use.
You see, I mentioned above that the neurons and synapses perform calculations. The question on your mind should be: “How do they learn what calculations to perform?” Was I right? The answer is that we need to essentially ask them a large amount of questions, and provide them with answers. This is a field called supervised learning. With enough examples of question-answer pairs, the calculations and values stored at each neuron and synapse are slowly adjusted. Usually this is through a process called backpropagation.
Imagine you’re walking down a sidewalk and you see a lamp post. You’ve never seen a lamp post before, so you walk right into it and say “ouch.” The next time you see a lamp post you scoot a few inches to the side and keep walking. This time your shoulder hits the lamp post and again you say “ouch.” The third time you see a lamp post, you move all the way over to ensure you don’t hit the lamp post. Except now something terrible has happened — now you’ve walked directly into the path of a mailbox, and you’ve never seen a mailbox before. You walk into it and the whole process happens again. Obviously, this is an oversimplification, but it is effectively what backpropagation does. An artificial neural network is given a multitude of examples and then it tries to get the same answer as the example given. When it is wrong, an error is calculated and propagated backwards through the ANN, and the values at each neuron and synapse are adjusted for the next time. This process takes a LOT of examples. For real world applications, the number of examples can be in the millions.
Now that we have an understanding of artificial neural networks and somewhat of an understanding in how they work, there’s another question that should be on your mind. How do we know how many neurons we need to use? And why did you bold the word layers earlier? Layers are just sets of neurons. We have an input layer which is the data we provide to the ANN. We have the hidden layers, which is where the magic happens. Lastly, we have the output layer, which is where the finished computations of the network are placed for us to use.
Layers themselves are just sets of neurons. In the early days of multilayer perceptrons, we originally thought that having just one input layer, one hidden layer, and one output layer was sufficient. It makes sense, right? Given some numbers, you just need one set of computations, and then you get an output. If your ANN wasn’t calculating the correct value, you just added more neurons to the single hidden layer. Eventually, we learned that in doing this we were really just creating a linear mapping from each input to the output. In other words, we learned that a certain input would always map to a certain output. We had no flexibility and really could only handle inputs we’d seen before. This was by no means what we wanted.
Now introduce deep learning, which is when we have more than one hidden layer. This is one of the reasons we have better ANNs now, because we need hundreds of nodes with tens if not more layers. This leads to a massive amount of variables that we need to keep track of at a time. Advances in parallel programming also allow us to run even larger ANNs in batches. Our artificial neural networks are now getting so large that we can no longer run a single epoch, which is one full pass over the entire training set, in one go. We need to do everything in batches, which are just subsets of the training data, applying backpropagation after each batch until the epoch is complete.
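As a rough sketch of that training-loop structure (the update function here is a placeholder for whatever backpropagation step your network uses):

```python
import numpy as np

def train(update_weights, X, y, epochs=10, batch_size=32):
    """One epoch is one full pass over the training data; weights are updated
    (backpropagation) after every batch, i.e. after every small subset of the data."""
    for epoch in range(epochs):
        order = np.random.permutation(len(X))            # shuffle each epoch
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            update_weights(X[batch], y[batch])           # placeholder backprop step
```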
Along with now using deep learning, it’s important to know that there are a multitude of different architectures of artificial neural networks. The typical ANN is set up in a way where each neuron is connected to every other neuron in the next layer. These are specifically called feed forward artificial neural networks (even though ANNs are generally all feed forward). We’ve learned that by connecting neurons to other neurons in certain patterns, we can get even better results in specific scenarios.
Recurrent Neural Networks (RNN) were created to address the flaw in artificial neural networks that didn’t make decisions based on previous knowledge. A typical ANN had learned to make decisions based on context in training, but once it was making decisions for use, the decisions were made independent of each other.
When would we want something like this? Well, think about playing a game of Blackjack. If you were given a 4 and a 5 to start, you know that 2 low cards are out of the deck. Information like this could help you determine whether or not you should hit. RNNs are very useful in natural language processing since prior words or characters are useful in understanding the context of another word. There are plenty of different implementations, but the intention is always the same. We want to retain information. We can achieve this through having bi-directional RNNs, or we can implement a recurrent hidden layer that gets modified with each feedforward. If you want to learn more about RNNs, check out either this tutorial where you implement an RNN in Python or this blog post where uses for an RNN are more thoroughly explained.
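To get a feel for what “a recurrent hidden layer that gets modified with each feedforward” means, here is a minimal sketch of a single vanilla RNN step (dimensions and weights are invented for illustration):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla recurrent layer: the new hidden state depends on the
    current input AND the previous hidden state, which is how information from
    earlier in the sequence is retained."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy dimensions: 8-dimensional inputs, 16-dimensional hidden state.
rng = np.random.default_rng(0)
W_xh, W_hh, b_h = rng.normal(size=(8, 16)), rng.normal(size=(16, 16)), np.zeros(16)

h = np.zeros(16)
for x_t in rng.normal(size=(5, 8)):   # a sequence of five inputs
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```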
An honorable mention goes to Memory Networks. The concept is that we need to retain more information than what an RNN or LSTM keeps if we want to understand something like a movie or book where a lot of events might occur that build on each other.
Convolutional Neural Networks (CNN), sometimes called LeNets (named after Yann LeCun), are artificial neural networks where the connections between layers appear to be somewhat arbitrary. However, the reason for the synapses to be setup the way they are is to help reduce the number of parameters that need to be optimized. This is done by noting a certain symmetry in how the neurons are connected, and so you can essentially “re-use” neurons to have identical copies without necessarily needing the same number of synapses. CNNs are commonly used in working with images thanks to their ability to recognize patterns in surrounding pixels. There’s redundant information contained when you look at each individual pixel compared to its surrounding pixels, and you can actually compress some of this information thanks to their symmetrical properties. Sounds like the perfect situation for a CNN if you ask me. Christopher Olah has a great blog post about understanding CNNs as well as other types of ANNs which you can find here. Another great resource for understanding CNNs is this blog post.
The last ANN type that I’m going to talk about is the type called Reinforcement Learning. Reinforcement Learning is a generic term used for the behavior that computers exhibit when trying to maximize a certain reward, which means that it in itself isn’t an artificial neural network architecture. However, you can apply reinforcement learning or genetic algorithms to build an artificial neural network architecture that you might not have thought to use before. A great example and explanation can be found in this video, where YouTube user SethBling creates a reinforcement learning system that builds an artificial neural network architecture that plays a Mario game entirely on its own. Another successful example of reinforcement learning can be seen in this video where the company DeepMind was able to teach a program to master various Atari games.
Now you should have a basic understanding of what’s going on with the state of the art work in artificial intelligence. Neural networks are powering just about everything we do, including language translation, animal recognition, picture captioning, text summarization and just about anything else you can think of. You’re sure to hear more about them in the future so it’s good that you understand them now!
This post was written by Aaron at Josh.ai. Previously, Aaron worked at Northrop Grumman before joining the Josh team where he works on natural language processing (NLP) and artificial intelligence (AI). Aaron is a skilled YoYo expert, loves video games and music, has been programming since middle school and recently turned 21.
Josh.ai is an AI agent for your home. If you’re interested in following Josh and getting early access to the beta, enter your email at https://josh.ai.
Like Josh on Facebook — http://facebook.com/joshdotai
Follow Josh on Twitter — http://twitter.com/joshdotai
|
Milo Spencer-Harper | 317 | 6 | https://medium.com/deep-learning-101/how-to-create-a-mind-the-secret-of-human-thought-revealed-6211bbdb092a?source=tag_archive---------4---------------- | How to create a mind: The secret of human thought revealed | In my quest to learn about AI, I read ‘How to create a mind: The secret of human thought revealed’ by Ray Kurzweil. It was incredibly exciting and I’m going to share what I’ve learned.
If I was going to summarise the book in one sentence, I could do no better than Kurzweil’s own words:
Kurzweil argues convincingly that it is both possible and desirable. He goes on to suggest that the algorithm may be simpler than we would expect and that it will be based on the Pattern Recognition Theory of the Mind (PRTM).
The human brain is the most incredible thing in the known universe. A three-pound object, it can discover relativity, imagine the universe, create music, build the Taj Mahal and write a book about the brain.
However, it also has limitations and this gives us clues as to how it works. Recite the alphabet. Ok. Good. Now recite it backwards. The former was easy, the latter likely impossible. Yet, a computer finds it trivial to reverse a list. This tells us that the human brain can only retrieve information sequentially. Studies have also revealed that when thinking about something, we can only hold around four high level concepts in our brain at a time. That’s why we use tools, such as pen and paper to solve a maths problem, to help us think.
So how does the human brain work? Mammals actually have two brains. The old reptilian brain, called the amygdala and the conscious part, called the neocortex. The amygdala is pre-programmed through evolution to seek pleasure and avoid pain. We call this instinct. But what distinguishes mammals from other animals, is that we have also evolved to have a neocortex. Our neocortex rationalises the world around us and makes predictions. It allows us to learn. The two brains are tightly bound and work together. However when reading the book, I wondered if these two brains might also be in conflict. It would explain why the idea of internal struggle is present throughout literature and religion: good vs. evil, social conformity vs. hedonism.
What’s slightly more alarming is we may have more minds than that. Our brain is divided into two hemispheres, left and right. Studies of split-brain patients, where the connection between them has been severed, shows that these patients are not necessarily aware that the other mind exists. If one mind moves the right-hand, the other mind will post-rationalise this decision by creating a false memory (a process known as confabulation). This has implications for us all. We may not have the free will which we perceive to have. Our conscious part of the brain, may simply be creating explanations for what the unconscious parts have already done.
So how does the neocortex work? We know that it consists of around 30 billion cells, which we call neurons. These neurons are connected together and transmit information using electrical impulses. If the sum of the electrical pulses across multiple inputs to a neuron exceeds a certain threshold, that neuron fires causing the next neuron in the chain to fire, and this goes on continuously. We call these processes thoughts. At first, scientists thought this neural network was such a complicated and tangled web, that it would be impossible to ever understand.
However, Kurzweil uses the example of Einstein’s famous equation E = mc^2 to demonstrate that sometimes the solutions to complex problems are surprisingly simple. There are many examples in science, from Newtonian mechanics to thermodynamics, which show that moving up a level of abstraction dramatically simplifies modelling complex systems.
Recent innovations in brain imaging techniques have revealed that the neocortex contains modules, each consisting of around 100 neurons, repeating over and over again. There are around 300 million of these modules arranged in a grid. So if we could discover the equations which model this module, repeat it on a computer 300 million times and expose it to sensory input, we could create an intelligent being. But what do these modules do?
Kurzweil, who has spent decades researching AI, proposes that these modules are pattern recognisers. When reading this page, one pattern recogniser might be responsible for detecting a horizontal stroke. This module links upward to a module responsible for the letter ‘A’, and if the other relevant stroke modules light up, the ‘A’ module also lights up. The modules ‘A’ , ‘p’, ‘p’ and ‘l’ link to the ‘Apple’ module, which in turn is linked to higher level pattern recognisers, such as thoughts about apples. You don’t actually need to see the ‘e’ because the ‘Apple’ pattern recogniser fires downward, telling the one responsible for the letter ‘e’ that there is a high probability of seeing one. Conversely, inhibitory signals suppress pattern recognisers from firing if a higher level pattern recogniser has detected such an event is unlikely, given the context. We literally see what we expect to see. Kurzweil calls this the ‘Pattern Recogniser Theory of the Mind (PRTM)’. Although it is hard for us to imagine, all of our thoughts and decisions, can be explained by huge numbers of these pattern recognisers hooked together.
We organise these thoughts to explain the world in a hierarchical fashion and use words to give meaning to these modules. The world is naturally hierarchical and the brain mirrors this. Leaves are on trees, trees make up a forest, and a forest covers a mountain. Language is closely related to our thoughts, because language directly evolved from and mirrors our brain. This helps to explain why different languages follow remarkably similar structures. It explains why we think using our native language. We use language not only to express ideas to others, but to express ideas within our own mind.
What’s interesting, is that when AI researchers have worked independently of neuroscientists, their most successful methods turned out to be equivalent to the human brain’s methods. Thus, the human brain offers us clues for how to create an intelligent nonbiological entity.
If we work out the algorithm for a single pattern recogniser, we can repeat it on a computer, creating a neural network. Kurzweil argues that these neural networks could become conscious, like a human mind. Free from biological constraints and benefiting from the exponential growth in computing power, these entities could create even smarter entities, and surpass us in intelligence (this prediction is called technological singularity). I’ll discuss the ethical and social considerations in a future blog post, but for now let’s assume it is desirable.
The question then becomes, what is the algorithm for a single pattern recogniser? Kurzweil recommends using a mathematical technique called hierarchical hidden Markov models, named after the Russian mathematician Andrey Markov (1856–1922). However, this technique is too technical to be properly explained in Kurzweil’s book.
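Kurzweil’s book stops short of the maths, but to make the idea a little less abstract, here is a minimal sketch of the forward algorithm for a plain (non-hierarchical) hidden Markov model; the probabilities are invented purely for illustration:

```python
import numpy as np

# A toy hidden Markov model: two hidden states, each emitting one of two observations.
A = np.array([[0.7, 0.3],      # transition probabilities between hidden states
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution

def forward(observations):
    """Probability of an observation sequence, summed over all hidden state paths."""
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return alpha.sum()

print(forward([0, 1, 0]))
```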
So my next two goals are:
(1) To learn as much as I can about hierarchical hidden Markov models.
(2) To build a simple neural network written in Python from scratch which can be trained to complete a simple task.
In my next blog post, I learn how to build a neural network in 9 lines of Python code.
Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI.
Fundamentals and Latest Developments in #DeepLearning
|
Karl N. | 10 | 7 | https://gab41.lab41.org/taking-keras-to-the-zoo-9a76243152cb?source=tag_archive---------5---------------- | Taking Keras to the Zoo – Gab41 | If you follow any of the popular blogs like Google’s research, FastML, Smola’s Adventures in Data Land, or one of the indie-pop ones like Edwin Chen’s blog, you’ve probably also used ModelZoo. Actually, if you’re like our boss, you affectionately call it “The Zoo”. (Actually x 2, if you have interesting blogs that you read, feel free to let us know!)
Unfortunately, ModelZoo is only supported in Caffe. Fortunately, we’ve taken a look at the difference between the kernels in Keras, Theano, and Caffe for you, and after reading this blog, you’ll be able to load models from ModelZoo into any of your favorite Python tools.
Why this post? Why not just download our Github code?
In short, it’s better you figure out how these things work before you use them. That way, you’re better armed to use the latest TensorFlow and Neon toolboxes if you’re prototyping and transitioning your code to Caffe.
So, there’s Hinton’s Dropout and then there’s Caffe’s Dropout...and they’re different. You might be wondering, “What’s the big deal?” Well sir, I have a name of a guy for you, and it’s Willy...Mr. Willy Nilly. One thing Willy Nilly likes is the number 4096. Another thing he likes is to introduce regularization (which includes Dropout) arbitrarily, and Bayesian theorists aren’t a fan. Those people try to fit their work into the probabilistic framework, and they’re trying to hold onto what semblance of theoretical bounds exist for neural networks. However, for you as a practitioner, understanding who’s doing what will save you hours of debugging code.
We singled out Dropout because the way people have implemented it spans the gamut. There’s actually some history as to this variation, but no one really cared, because optimizing for it has almost universally produced similar results. Much of the discussion stems from how the chain rule is implemented since randomly throwing stuff away is apparently not really a differentiable operation. Passing gradients back (i.e., backpropagation) is a fun thing to do; there’s a “technically right” way to do it, and then there’s what works.
Back to ModelZoo, where we’d recommend you note the only sentence of any substance in this section, and the sentence is as follows. While Keras and perhaps other packages multiply the activations by the retention probability at inference time, Caffe does not. That is to say, if you have a dropout level of 0.2, your retention probability is 0.8, and at inference time, Keras will scale the output of your prediction by 0.8. So, download the ModelZoo *.caffemodels, but know that deploying them on Caffe will produce non-scaled results, whereas Keras will produce scaled ones.
Hinton explains the reason why you need to scale, and the intuition is as follows. If you’ve only got a portion of your signal seeping through to the next layer during training, you should scale the expectation of what the energy of your final result should be. Seems like a weird thing to care about, right? The argument that minimizes x is still the same as the argument that minimizes 2x. This turns out to be a problem when you’re passing multiple gradients back and don’t implement your layers uniformly. Caffe works in instances like Siamese Networks or Bilinear Networks, but should you scale your networks on two sides differently, don’t be surprised if you’re getting unexpected results.
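To see the difference in miniature, here is a NumPy sketch of the two inference-time conventions described above (an illustration of the behaviour, not the actual library code):

```python
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.2                 # dropout level
retain = 1.0 - p_drop        # retention probability

activations = rng.normal(size=(4, 4096))   # e.g. a 4096-unit fully connected layer

# Training time (both conventions): randomly zero out a fraction p_drop of the units.
mask = rng.random(activations.shape) < retain
train_out = activations * mask

# Inference, convention A (what the post says Keras did): scale by the retention probability.
infer_scaled = activations * retain

# Inference, convention B (what the post says Caffe does): no scaling at all.
infer_raw = activations

# Same argmax, different magnitudes, which only bites when two branches of a
# network (e.g. a Siamese net) are not treated uniformly.
```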
What does this look like in Francois’s code? Look at the “Dropout” code on Github, or in your installation folder under keras/layers/core.py. If you want to make your own layer for loading in the Dropout module, just comment out the part of the code that does this scaling:
You can modify the original code, or you can create your own custom layer. (We’ve opted to keep our installation of Keras clean and just implemented a new class that extended MaskedLayer.) BTW, you should be careful in your use of Dropout. Our experience with them is that they regularize okay, but could contribute to vanishing gradients really quickly.
Everyday except for Sunday and some holidays, a select few machine learning professors and some signal processing leaders meet in an undisclosed location in the early hours of the morning. The topic of their discussion is almost universally, “How do we get researchers and deep learning practitioners to code bugs into their programs?” One of the conclusions a while back was that the definition of convolution and dense matrix multiplication (or cross-correlation) should be exactly opposite of each other. That way, when people are building algorithms that call themselves “Convolutional Neural Networks”, no one will know which implementation is actually being used for the convolution portion itself.
For those who don’t know, convolutions and sweeping matrix multiplication across an array of data differ in that the convolution kernel is flipped before being slid across the array. From Wikipedia, the discrete definition is: (f ∗ g)[n] = Σ_m f[m] · g[n − m].
On the other hand, if you’re sweeping matrix multiplications across the array of data, you’re essentially doing cross-correlation, which on Wikipedia looks like: (f ⋆ g)[n] = Σ_m f[m] · g[n + m].
Like we said, the only difference is that darned minus/plus sign, which caused us some headache.
We happen to know that Theano and Caffe follow different philosophies. Once again, Caffe doesn’t bother with pleasantries and straight up codes efficient matrix multiplies. To load models from ModelZoo into either Keras or Theano will require the transformation, because they strictly follow the definition of convolution. The easy fix is to flip it yourself when you’re loading the weights into your model. For 2D convolution, this looks like:
weights=weights[:,:,::-1,::-1]
Here, the variable “weights” will be inserted into your model’s parameters. You can set weights by indexing into the model. For example, say I want to set the 9th layer’s weights. I would type:
model.layers[9].set_weights(weights)
Incidentally, and this is important, when loading any *.caffemodel into Python, you may have to transpose it in order to use it. You can quickly find this out by loading it if you get an error, but we thought it worth noting.
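Putting the pieces together, a sketch of the load-flip-set flow might look like the following. It assumes the pycaffe bindings, uses an illustrative layer name (‘conv1_1’) and prototxt/caffemodel filenames, and assumes `model` is a Keras definition of the matching architecture that you have already built:

```python
import caffe  # assumes the Caffe Python bindings are installed

net = caffe.Net('VGG_deploy.prototxt', 'VGG.caffemodel', caffe.TEST)

# Caffe stores [weights, bias] blobs per layer; 'conv1_1' is an illustrative name.
W = net.params['conv1_1'][0].data   # shape: (out_channels, in_channels, kh, kw)
b = net.params['conv1_1'][1].data

# Caffe sweeps matrix multiplies (cross-correlation); Keras/Theano follow the
# textbook definition of convolution, so flip the kernels on both spatial axes.
W = W[:, :, ::-1, ::-1]

# Depending on the backend you may also need to transpose W here (see above).
model.layers[1].set_weights([W, b])   # layer index is illustrative
```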
Alright, alright, we know what you’re really here for; just getting the code and running with it. So, we’ve got some example code that classifies using Keras and the VGG net from the web at our Git (see the link below). But, let’s go through it just a bit. Here’s a step by step account of what you need to do to use the VGG caffe model.
And now you have the basics! Go ahead and take a look at our Github for some goodies. Let us know!
Originally published at www.lab41.org on December 13, 2015.
Gab41 is Lab41’s blog exploring data science, machine learning, and artificial intelligence. Geek out with us!
|
Milo Spencer-Harper | 42 | 3 | https://medium.com/@miloharper/thanks-so-much-for-your-response-jared-really-glad-to-hear-you-enjoyed-reading-it-9d73caa469ff?source=tag_archive---------6---------------- | Thanks so much for your response Jared. Really glad you enjoyed reading it. | Thanks so much for your response Jared. Really glad you enjoyed reading it.
Could you go into more detail about finding the error on layer 1?
That’s a really great question! I’ve changed this response quite a bit as I wrote it, because your question helped me improve my own understanding. It sounds like you know quite a lot about neural networks already, however I’m going to explain everything fully for readers who are new to the field. In the article you read, I modelled the neural network using matrices (grids of numbers). That’s the most common method as it is computationally faster and mathematically equivalent, but it hides a lot of the details. For example, line 15 calculates the error in layer 1, but it is hard to visualise what it is doing.
To help me learn, I’ve re-written that same code by modelling the layers, neurons and synapses explicitly and have created a video of the neural network learning. I’m going to use this new version of my code to answer your question.
For clarity, I’ll describe how I’m going to refer to the layers. The three input neurons are layer 0, the four neurons in the hidden layer are layer 1 and the single output neuron is layer 2. In my code, I chose to associate the synapses with the neuron they flow into.
How do I find the error in layer 1? First I calculate the error of the output neuron (layer 2), which is the difference between its output and the output in the training set example. Then I work my way backwards through the neural network. So I look at the incoming synapses into layer 2, and estimate how much each of the neurons in layer 1 were responsible for the error. This is called back propagation.
In my new version of the code, the neural network is represented by a class called NeuralNetwork, and it has a method called train(), which is shown below. You can see me calculating the error of the output neuron (lines 3 and 4). Then I work backwards through the layers (line 5).
Next, I cycle through all the neurons in a layer (line 6) and call each individual neuron’s train() method (line 7).
But what does the neuron’s train() method do? Here it is:
You can see that I cycle through every incoming synapse into the neuron. The two key things to note are:
Let’s consider Line 4 even more carefully, since this is the line which answers your question directly. For each neuron in layer 1, its error is equal to the error in the output neuron (layer 2), multiplied by the weight of its synapse into the output neuron, multiplied by the sensitivity of the output neuron to input.
The sensitivity of a neuron to input is described by the gradient of its output function. Since I used the Sigmoid curve as my output function, the gradient is the derivative of the Sigmoid curve. As well as using the gradient to calculate the errors, I also used the gradient to adjust the weights, so this method of learning is called gradient descent.
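As a minimal reconstruction of the idea described above (not the author’s exact classes), the per-neuron error and weight update look roughly like this:

```python
def sigmoid_gradient(output):
    # Derivative of the Sigmoid curve, expressed in terms of the neuron's output
    return output * (1 - output)

class Synapse:
    def __init__(self, source, weight):
        self.source = source     # the neuron this synapse flows from
        self.weight = weight

class Neuron:
    def __init__(self, incoming_synapses):
        self.synapses = incoming_synapses
        self.output = 0.0
        self.error = 0.0         # set from the training example for the output neuron

    def train(self):
        for synapse in self.synapses:
            # Estimate how responsible the upstream (layer 1) neuron was for this error
            synapse.source.error += self.error * synapse.weight * sigmoid_gradient(self.output)
            # Adjust the weight in proportion to the upstream output (gradient descent)
            synapse.weight += synapse.source.output * self.error * sigmoid_gradient(self.output)
```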
If you look back at my old code, which uses matrices you can see that it is mathematically equivalent (unless I made a mistake). With the matrices method, I calculated the error for all the neurons in layer 1 simultaneously. With the new code, I iterated through each neuron separately.
I hope that helps answer your question.
Also, I’m curious if there is any theory or rule of thumb on how many hidden layers and how many neurons in each layer should be used to solve a problem.
Another good question! I’m not sure. I’m pretty new to neural networks. I only started learning about them recently.
I did read a book by the AI researcher Ray Kurzweil, which said that an evolutionary approach works better than consulting experts, when selecting the overall parameters for a neural network. Those neural networks which learned the best, would be selected, he would make random mutations to the parameters, and then pit the offspring against one another.
Studied Economics at Oxford University. Founder of www.moju.io. Interested in politics and AI.
|
Nikolai Savas | 50 | 10 | https://medium.com/@savas/craig-using-neural-networks-to-learn-mario-a76036b639ad?source=tag_archive---------7---------------- | CrAIg: Using Neural Networks to learn Mario – Nikolai Savas – Medium | Joe Crozier and I recently came back from YHack, a 36-hour, 1500 person hackathon held by Yale University. This is our second year in a row attending, and for the second time we managed to place in the top 8!
Our project, named “crAIg”, is a self-teaching algorithm that learns to play Super Mario Bros. for the Nintendo Entertainment System (NES). It begins with absolutely no knowledge of what Mario is, how to play it, or what winning looks like, and using neuroevolution, slowly builds up a foundation for itself to be able to progress in the game.
My focus on this project was the gritty details of the implementation of crAIg’s evolution algorithm, so I figured I’d make a relatively in-depth blog post about it.
crAIg’s evolution is based on a paper titled Evolving Neural Networks through Augmenting Topologies, specifically an algorithm titled “NEAT”. The rest of the blog post is going to cover my implementation of it, hopefully in relatively layman’s terms.
Before we jump right into the algorithm, I’m going to lay a foundation for the makeup of crAIg’s brain. His “brain” at any given point playing the game is made up of a collection of “neurons” and “synapses”, alternatively titled nodes and connections/links. Essentially, his brain is a directed graph.
Above is the second part of this project, a Node.js server that displays the current state of crAIg’s brain, or what he is “thinking”. Let’s go through it quickly to understand what it’s representing.
On the left you see a big grid of squares. This is what the game looks like right now, or what crAIg can “see”. He doesn’t know what any of the squares mean, but he knows that an “air” tile is different from a “ground” tile in some way. Each of the squares is actually an input neuron.
On the right side you can see the 4 “output neurons”, or the buttons that crAIg can press. You can also see a line going from one of the black squares on the left grid to the “R” neuron, labelled “1”. This is a synapse, and when the input neuron fires on the left, it will send a signal down the synapse and tell crAIg to press the “R” button. In this way, crAIg walks right. As crAIg evolves, more neurons and synapses are created until his brain might look something more like this:
In this one I’ll just point out a couple things. First of all, the green square on the left is a goomba. Second, you can see another neuron at the very bottom (labelled 176). This is called a hidden neuron, and represents a neuron that is neither input nor output. They appear in crAIg’s brain for added complexity as he evolves. You can also see that at his time of death (Mario just died to a goomba), he was trying to press the “R” and “B” buttons.
While learning Mario is a neat application of neural networks and neuroevolution, it serves mostly as a means to demonstrate the power of these self-evolving neural networks. In reality, the applications for neural networks are endless. While crAIg only learned how to play a simple NES game, the exact same algorithm that was implemented could also be applied to a robot that cleans your house, works in a factory, or even paints beautiful paintings.
crAIg is a cool peek into the future where machines no longer need to be programmed to complete specific tasks, but are instead given guidelines and can teach themselves and learn from experience. As the tasks we expect machines to complete become more and more complex, it becomes less possible to “hard code” their tasks in. We need more versatile machines to work for us, and evolving neural networks are a step in that direction.
If you’re curious about some history behind the problems encountered by neuroevolution, I highly recommend reading the paper that this algorithm is based off. The first section of the paper covers many different approaches to neuroevolution and their benefits.
NEAT is a genetic algorithm that puts every iteration of crAIg’s brain to the test and then selectively breeds them in a very similar way to the evolution of species in nature. The hierarchy is as follows:
Synapse/Neuron: Building blocks of crAIg’s brain.
Genome: An iteration of crAIg’s brain. Essentially a collection of neurons and synapses.
Species: A collection of Genomes.
Generation: An iteration of the NEAT algorithm. This is repeated over and over to evolve crAIg.
The first step every generation is to calculate the fitness of every individual genome from the previous generation. This involves running the same function on each genome so that NEAT knows how successful each one is. For crAIg, this means running through a Mario level using a particular genome, or “brain”. After running through the level, we determine the “fitness” of the genome by this function:
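crAIg’s exact fitness formula isn’t shown here, but as an illustration of the shape such a function takes, a Mario-playing genome is typically rewarded for horizontal progress and penalised for elapsed time (the game-harness call and weighting below are hypothetical):

```python
def fitness(genome, level):
    """Run one genome through a level and score it: reward distance travelled,
    penalise time taken. Illustrative only, not necessarily crAIg's exact formula."""
    result = play_level(level, controller=genome)   # hypothetical game-harness call
    return result.max_x_position - 0.1 * result.frames_elapsed
```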
Once the fitness of every genome has been calculated, we can move on to the next portion of the algorithm.
This part of the algorithm is probably the least intuitive. The reason for this “adjusted fitness” is to discourage species from growing too big. As the population in a species goes up, their “adjusted fitness” goes down, forcing the genetic algorithm to diversify.
The proper implementation of this algorithm is relatively intensive, so for crAIg’s implementation we simplified it to the following:
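The simplified version used in crAIg isn’t reproduced here; the standard NEAT simplification, which is probably close to what is meant, divides each genome’s raw fitness by the size of its species:

```python
def adjust_fitness(species):
    """Explicit fitness sharing, as in the NEAT paper: each genome's fitness is divided
    by the number of genomes in its species, so large species are penalised and the
    algorithm is pushed to diversify."""
    for genome in species.genomes:
        genome.adjusted_fitness = genome.fitness / len(species.genomes)
```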
The important part here is that each genome now has an adjusted fitness value associated with it.
Here’s where the natural selection part comes in! The “Survival of the fittest” portion is all about determining how many genomes survive another generation, as well as how many offspring will be born in the species. The algorithms used here aren’t outlined directly in the paper, so most of these algorithms were created through trial and error.
The first step is to determine how many of a species’ genomes will die off to make room for more babies. This is done proportionally to a species’ adjusted fitness: the higher the adjusted fitness, the more die off to make room for babies.
The second step is to determine how many children should be born in the species. This is also proportional to the adjusted fitness of the species.
By the end of these two functions, the species will have a certain number of genomes left as well as a “baby quota” — the difference between the number of genomes and the populationSize.
This algorithm is necessary to allow for species to be left behind. Sometimes a species will go down the completely wrong path, and there’s no point in keeping them around. This algorithm works in a very simple way: If a species is in the bottom __% of the entire generation, it is marked for extinction. If a species is marked for extinction __ times in a row, then all genomes in the species are killed off.
Now comes the fun genetics part! Each species should have a certain number of genomes as well as a certain number of allotted spots for new offspring. Those spots now need to be populated.
Each empty population spot needs to be filled, but can be filled through either “asexual” or “sexual” reproduction. In other words, offspring can result from either two genomes in the species being merged or from a mutation of a single genome in the species. Before I discuss the process of “merging” two genomes, I’ll first discuss mutations.
There are three kinds of mutations that can happen to a genome in NEAT. They are as follows:
1. Mutate Weights
This involves a re-distribution of all synapse weights in a genome. They can be either completely re-distributed or simply “perturbed”, meaning changed slightly.
2. Mutate Add Synapse
Adding a synapse means finding two previously unconnected nodes and connecting them with a synapse. This new synapse is given a random weight.
3. Mutate Add Node
This is the trickiest of the mutations. When adding a node, you need to split an already existing synapse into two synapses and add a node in between them. The weight of the original synapse is copied on to the second synapse, while the first synapse is given a weight of 1. One important fact to note is that the first synapse (bright red in the above picture) is not actually deleted, but merely “disabled”. This means that it exists in the genome, but it is marked as inactive.
Synapses added in either Mutate Add Node or Mutate Add Synapse are given a unique “id” called a “historical marking”, that is used in the crossover (mating) algorithm.
When two genomes “mate” to produce an offspring, there is an algorithm detailed in the NEAT paper that must be followed. The intuition behind it is to match up common ancestor synapses (remember we’ve been keeping their “historical marking”s), then take the mutations that don’t match up and mix and match them to create the child. Once a child has been created in this way, it undergoes the mutation process outlined above. I won’t go into too much detail on this algorithm but if you’re curious about it you can find a more detailed explanation of it in section 3.2 of the original paper, or you can see the code I used to implement it here.
Once all the babies have been created in every species, we can finally progress to the final stage of the genetic algorithm: Respeciation. Essentially, we first select a “candidate genome” from each species. This genome is now the representative for the species. All genomes that are not selected as candidates are put into a generic pool and re-organized. The re-organization relies on an equation called the “compatibility distance equation”.
This equation determines how similar (or different) any two given genomes are. I won’t go into the gritty details of how the equation works, as it is well explain in section 3.3 of the original paper, as well as here in crAIg’s code.
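For reference, here is a sketch of that equation as it appears in the NEAT paper, where excess genes, disjoint genes and average weight differences of matching genes are weighted by coefficients c1, c2 and c3 (the genome/synapse attribute names below are illustrative):

```python
def compatibility_distance(g1, g2, c1=1.0, c2=1.0, c3=0.4):
    """delta = c1*E/N + c2*D/N + c3*W_bar, where E = excess genes, D = disjoint genes,
    W_bar = average weight difference of matching genes, N = size of the larger genome."""
    marks1 = {s.historical_marking: s for s in g1.synapses}
    marks2 = {s.historical_marking: s for s in g2.synapses}
    matching = marks1.keys() & marks2.keys()
    non_matching = marks1.keys() ^ marks2.keys()

    cutoff = min(max(marks1), max(marks2))
    excess = sum(1 for m in non_matching if m > cutoff)   # genes beyond the other genome's range
    disjoint = len(non_matching) - excess                 # non-matching genes within the range

    n = max(len(marks1), len(marks2))
    w_bar = sum(abs(marks1[m].weight - marks2[m].weight) for m in matching) / max(len(matching), 1)
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar
```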
If a genome is too different from any of the candidate genomes, it is placed in its own species. Using this process, all of the genomes in the generic pool are re-placed into species.
Once this process has completed, the generation is done, and we are ready to re-calculate the fitness of each of the genomes.
While creating crAIg meant getting very little sleep at YHack, it was well worth it for a couple reasons.
First of all, the NEAT algorithm is a very complex one. Learning how to implement a complex algorithm without losing myself in its complexity was an exercise in code cleanliness, despite being pressed for time because of the hackathon.
It was also very interesting to create an algorithm that is mostly based off a paper as opposed to one that I have example code to work with. Often this meant carefully looking into the wording used in the paper to determine whether I should be using a > or a >=, for example.
One of the most difficult parts of this project was that I was unable to test as I was programming. I essentially wrote all of the code blind and then was able to test and debug it once it had all been created. This was for a couple reasons, partially because of the time constraints of a hackathon, and partially because the algorithm as a whole has a lot of interlocking parts, meaning they needed to be in a working state to be able to see if the algorithm worked.
Overall I’m happy with and proud of how Joe and I were able to deal with the stress of creating such a deep and complex project from scratch in a short 36-hour period. Not only did we enjoy ourselves and place well, but we also managed to teach crAIg some cool skills, like jumping over the second pipe in Level 1:
http://savas.ca/ — niko@savas.ca
|
Dr Ben Medlock | 32 | 4 | https://medium.com/@Ben_Medlock/why-turing-s-legacy-demands-a-smarter-keyboard-9e7324463306?source=tag_archive---------8---------------- | Why Turing’s legacy demands a smarter keyboard – Dr Ben Medlock – Medium | Why Turing’s legacy demands a smarter keyboard
When you start a company, you dream of walking in the footsteps of your heroes. For those working in artificial intelligence, the British computer scientist and father of the field Alan Turing always comes to mind. I thought of him when I did my PhD, when I co-founded an AI keyboard company in 2009, and when we pasted his name on a meeting room door in our first real office.
As a British tech company, today is a big day for SwiftKey. We’ve introduced some of the principles originally conceived of by Turing — artificial neural networks — into our smartphone keyboard for the first time. I want to explain how we managed to do it and how a technology like this, something you may never have heard of before, will help define the smartphone experience of the future. This is my personal take; for the official version check out the SwiftKey blog.
Frustration-free typing on a smartphone relies on complex software to automatically fix typos and predict the words you might want to use. SwiftKey has been at the forefront of this area since 2009, and today our software is used across the world on more than half a billion handsets.
Soon after we launched the first version of our app in 2010, I started to think about using neural networks to power smartphone typing rather than the more traditional n-gram approach (a sophisticated form of word frequency counting). At the time it seemed little more than theoretical, as mobile hardware wasn’t up to the task. However, three years later, the situation began to look more favorable, and in late 2013, our team started working on the idea in earnest.
In order to build a neural network-powered SwiftKey, our engineers were tasked with the enormous challenge of coming up with a solution that would run locally on a smartphone without any perceptible lag. Neural network language models are typically deployed on large servers, requiring huge computational resources. Getting the tech to fit into a handheld mobile device would be no small feat.
After many months of trial, error and lots of experimentation, the team realized they might have found an answer with a combination of two approaches. The first was to make use of the graphical processing unit (GPU) on the phone (utilizing the powerful hardware acceleration designed for rendering complex graphical images) but thanks to some clever programming, they were also able to run the same code on the standard processing unit when the GPU wasn’t available. This combo turned out to be the winning ticket.
So, back to Turing. In 1948 he published a little-known essay called Intelligent Machinery in which he outlined two forms of computing he felt could ultimately lead to machines exhibiting intelligent behavior. The first was a variant of his highly influential “universal Turing machine”, destined to become the foundation for hardware design in all modern digital computers. The second was an idea he called an “unorganized machine”, a type of computer that would use a network of “artificial neurons” to accept inputs and translate them into predicted outputs.
Connecting together many small computing units, each with the ability to receive, modify and pass on basic signals, is inspired by the structure of the human brain. That’s why the appropriation of this concept in software form is called an “artificial neural network”, or a “neural network” for short. The idea is that a collection of artificial neurons are connected together in a specific way (called a “topology”) such that a given set of inputs (what you’ve just typed, for example) can be turned into a useful output (e.g. your most likely next word). The network is then “trained” on millions, or even billions, of data samples and the behavior of the individual neurons is automatically tweaked to achieve the desired overall results.
In the last few years, neural network approaches have facilitated great progress on tough problems such as image recognition and speech processing. Researchers have also begun to demonstrate advances in non-traditional tasks such as automatically generating whole sentence descriptions of images. Such techniques will allow us to better manage the explosion of uncategorized visual data on the web, and will lead to smarter search engines and aids for the visually impaired, among a host of other applications.
The fact that the human brain is so adept at working with language suggests that neural networks, inspired by the brain’s internal structure, are a good bet for the future of smartphone typing. In principle, neural networks also allow us to integrate powerful contextual cues to improve accuracy, for instance a user’s current location and the time of day. These will be stepping stones to more efficient and personal device interactions — the keyboard of the future will provide an experience that feels less like typing and more like working with a close friend or personal assistant.
Applying neural networks to real world problems is part of a wider technology movement that’s changing the face of consumer electronics for good. Devices are getting smarter, more useful and more personal. My goal is that SwiftKey contributes to this revolution. We should all be spending less time fixing typos and more time saying what we mean, when it matters. It’s the legacy we owe to Turing.
The photograph “Alan Turing” by joncallas is licensed under CC BY 2.0.
Technopreneur, @SwiftKey co-founder
|
Nieves Ábalos | 18 | 7 | https://labs.beeva.com/sem%C3%A1ntica-desde-informaci%C3%B3n-desestructurada-90ce87736812?source=tag_archive---------9---------------- | Semántica desde información desestructurada – BEEVA Labs | Detecting patterns is a core task in the world of Natural Language Processing. Pattern detection lets us classify documents, which has many applications: sentiment analysis, document retrieval, web search, spam filtering and so on. This classification is done automatically, either in a supervised or an unsupervised way (the latter also known as document clustering).
Among the most classic and widely used techniques (generally supervised) we find Naive Bayes classifiers, decision trees (ID3 or C4.5), tf-idf, Latent Semantic Indexing (LSI) and Support Vector Machines (SVM). Some feature-extraction techniques are inspired by how humans learn from simple information and build up to more complex information. We can distinguish between neural networks (some neural network topologies fall under the umbrella of ‘deep learning’) and techniques that do not use such networks to recognise patterns.
At BEEVA we have run into the same problem several times: how do we know whether two documents are similar? (and by “similar” we mean that they are about the same thing). Among other things, this would let us automatically group documents under the same topic. So, up front, we face two challenges: extracting semantics, and extracting the topics documents are about.
We need to represent documents in a way that the algorithms we use can understand. Normally, these representations or models are based on matrices of features for each document. To represent text we can use either local or continuous representation techniques. A local representation is one in which we only consider words in isolation, and a document is represented as a set of index terms or keywords (n-grams, bag-of-words...). This kind of representation ignores the relationships between terms. A continuous representation is one that does take into account the context of words and the relationships between them; documents are represented as matrices, vectors, sets or even nodes (LSA or LSI, LDS, LDA, and distributed or predictive representations using neural networks).
For our first challenge, extracting semantics, we are going to try a continuous representation called distributed representations of words. It consists of learning vector representations of words: we end up with a multidimensional space in which each word is represented as a vector. One of the interesting things about these vectors is that they capture features as relevant as the syntactic and semantic properties of words (Turian et al., 2010). The other is that the learning is done from unlabelled input data, that is, it is unsupervised.
These vectors can be used as input to many Natural Language Processing and Machine Learning applications. In fact, for our second challenge we will use these vectors to try to extract topics from documents.
To apply this technique we use the word2vec tool (Mikolov et al., Google, 2013), which takes any corpus of texts or documents as input and produces vectors representing the words as output. The architecture word2vec is built on uses neural networks to learn these representations. Vectors representing sentences, paragraphs or even whole documents can also be obtained (Le and Mikolov, 2014).
Primero, utilizamos la implementación en Python de la herramienta word2vec, incluida en la librería gensim. Como entrada para generar los vectores tenemos dos datasets con documentos en castellano: Wikipedia y Yahoo! Answers (de este dataset, solo los que están en español).
El proceso es el siguiente (Figura 1), dado el conjunto de textos, se construye un vocabulario y word2vec aprende las representaciones vectoriales de palabras. Los algoritmos de aprendizaje que utiliza word2vec son: bag-of-words continuo y skip-gram continuo. Ambos algoritmos aprenden las representaciones de una palabra, las cuales son útiles para predecir otras palabras en la frase.
Since we know that these vectors capture many linguistic regularities, we can apply vector operations to extract many interesting properties. For example, if we want to know which words are most similar to a given one, we look for those that lie closest by applying the cosine distance (or cosine similarity).
For example, with the Wikipedia model, the five words most similar to a given word.
We can also obtain the six words most similar to two given words, with both the Wikipedia model and the Yahoo model, to see the differences:
Another interesting property is that vector operations such as vector(rey) - vector(hombre) + vector(mujer) ['king' - 'man' + 'woman'] give us a vector very close to vector(reina) ['queen'].
For example, vector(pareja) - vector(hombre) + vector(novio) ['couple' - 'man' + 'boyfriend'] gives us these vectors as a result:
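The result tables from the original post are not reproduced in this copy, but queries of this kind can be issued directly against a trained gensim model; the model file and the query words below are purely illustrative, and older gensim exposes most_similar on the model itself while newer versions expose it on model.wv:
```python
# Illustrative similarity and analogy queries (outputs depend on the corpus used).
from gensim.models import Word2Vec

model = Word2Vec.load('wiki_es.w2v')

# Five nearest words to a given word, by cosine similarity:
print(model.most_similar('coche', topn=5))

# Six words closest to a pair of words:
print(model.most_similar(['madrid', 'barcelona'], topn=6))

# The analogy above: vector(rey) - vector(hombre) + vector(mujer) ~ vector(reina)
print(model.most_similar(positive=['rey', 'mujer'], negative=['hombre'], topn=1))
```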
Because we worked with two different datasets, Wikipedia and Yahoo! Answers, we can build two vector spaces that differ slightly in the vocabulary they use and the semantics embedded in them. In the Yahoo! model we find, among the most similar words, the same word misspelt in several different ways. This does not happen with Wikipedia, where the writing is much more correct.
In addition, the Yahoo! dataset contains not only questions in Castilian Spanish, but also questions in Mexican, Argentine and other Latin American variants of the language. This lets us find similar words across different dialects.
As for the time needed to build our vector space, most of it goes into pre-processing and cleaning the documents. The gensim implementation lets us tune the model-building parameters and even use several workers with Cython so that training runs faster. The quality of these vectors depends on the amount of training data, the size of the vectors and the algorithm chosen for training. To obtain better results, the models need to be trained on large datasets and with enough dimensionality. For more detail we recommend reading the work of Mikolov and Le.
The following table shows roughly how long it takes to train about 500 MB of data, enough to obtain a good vector model. The total time includes pre-processing the data, training, and saving the model for later use.
To work with vector representations of whole documents we used doc2vec, also from gensim. As input, we treated a document as either a Wikipedia page or a Yahoo question together with its answers. We varied the size of the input file (from 100,000 documents up to 258,088) for a single worker and a dimensionality of 300, and the training time drops considerably, as the following table shows:
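A minimal sketch of that setup with gensim's doc2vec is shown below; the file name, tokenization and parameter values are assumptions, and newer gensim versions rename docvecs to dv and size to vector_size:
```python
# Minimal doc2vec sketch with gensim (illustrative parameters only).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Assumes 'documents_es.txt' holds one pre-processed document per line.
docs = [TaggedDocument(words=line.split(), tags=[i])
        for i, line in enumerate(open('documents_es.txt'))]

d2v = Doc2Vec(docs, size=300, window=8, min_count=5, workers=1)
d2v.save('docs_es.d2v')

# Documents most similar to document 0:
print(d2v.docvecs.most_similar(0, topn=5))
```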
The tests we ran to see how this vector space behaves were not as satisfactory as with word2vec. The results for similar words are worse than with word2vec, and when we look for documents similar to a given one, the model does not return anything that makes much sense.
As an alternative, we looked for other methods that can tell us which documents resemble one another. We will present them in the next post.
Word2vec is considered a method inspired by deep learning by some groups of specialists in the field (I recommend reading this article to clarify the concepts), while others regard it not so much as 'deep learning' but rather as 'shallow learning'. Either way, building vector spaces that capture syntactic and semantic properties of words, automatically and without supervision, opens up a whole world of possibilities to explore. These vectors serve as input to many applications such as machine translation, clustering and categorization, and they can even feed other models based on deep learning. Beyond natural language, the approach is also being applied to images and speech recognition.
Since doc2vec did not convince us, our next step is to use these vector spaces to extract topics and categories from documents with techniques that are standard in the Natural Language Processing and Machine Learning world, such as tf-idf. We will cover that in a follow-up post.
The Yahoo data corpus (L6 — Yahoo! Answers Comprehensive Questions and Answers version 1.0 (multi part)) was obtained thanks to Yahoo! Webscope. To process this data we used the gensim library for Python, which implements word2vec.
Main image source: freedigitalphotos.net / kangshutters
Conversational interfaces expert, indie maker, product manager & entrepreneur. #VoiceFirst, #chatbots, #AI, #NLProc. Creating future concepts at @monoceros_xyz
Innovative Knowledge
|
Arthur Juliani | 9K | 6 | https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------0---------------- | Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks | For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead).
Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about Deep Q-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here.
For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide.
In its simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly.
We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this:
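The equation image from the original post is not reproduced in this copy; written in standard notation, the update rule described in the next paragraph is:

Q(s, a) \leftarrow r + \gamma \max_{a'} Q(s', a')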
This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment:
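The full walkthrough from the original post is not embedded in this copy. The condensed sketch below captures the core of a Q-table agent for FrozenLake; the hyperparameter values and the noisy-greedy action choice are illustrative assumptions, and the classic gym API (reset() returning a state, step() returning four values) is assumed:
```python
# Condensed Q-table agent for FrozenLake (illustrative hyperparameters).
import gym
import numpy as np

env = gym.make('FrozenLake-v0')
Q = np.zeros([env.observation_space.n, env.action_space.n])  # 16 x 4 table
lr, gamma, num_episodes = 0.8, 0.95, 2000

for i in range(num_episodes):
    s = env.reset()
    for _ in range(100):
        # Greedy action choice with decaying random noise for exploration.
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
        s1, r, done, _ = env.step(a)
        # Bellman update: move Q(s, a) towards r + gamma * max_a' Q(s1, a').
        Q[s, a] += lr * (r + gamma * np.max(Q[s1, :]) - Q[s, a])
        s = s1
        if done:
            break
```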
(Thanks to Praneet D for finding the optimal hyperparameters for this approach)
Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values.
In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above.
Below is the Tensorflow walkthrough of implementing our simple Q-Network:
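The full walkthrough is again not embedded in this copy. A minimal sketch of the graph for such a one-layer Q-network, written against the 1.x-era TensorFlow API, might look like the following; the training loop, exploration strategy and target construction are omitted, and the initialization scale and learning rate are assumptions:
```python
# One-layer Q-network graph sketch (TensorFlow 1.x style API).
import tensorflow as tf

tf.reset_default_graph()

state_in = tf.placeholder(shape=[1, 16], dtype=tf.float32)   # one-hot encoded state
W = tf.Variable(tf.random_uniform([16, 4], 0, 0.01))          # weights play the role of the old table cells
Qout = tf.matmul(state_in, W)                                  # predicted Q-values, one per action
predict = tf.argmax(Qout, 1)                                   # greedy action

nextQ = tf.placeholder(shape=[1, 4], dtype=tf.float32)         # target Q-values (Bellman targets)
loss = tf.reduce_sum(tf.square(nextQ - Qout))                  # sum-of-squares loss
update = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
```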
While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms!
If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated!
If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani.
More from my Simple Reinforcement Learning with Tensorflow series:
Deep Learning @Unity3D & Cognitive Neuroscience PhD student.
Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
|
Andrej Karpathy | 9.2K | 7 | https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------1---------------- | Yes you should understand backprop – Andrej Karpathy – Medium | When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards:
This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to:
> The problem with Backpropagation is that it is a leaky abstraction.
In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways.
We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy):
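The embedded snippet is not shown in this copy; the core of such a layer in raw numpy is roughly the sketch below, where W and x are assumed to be an existing weight matrix and input vector:
```python
import numpy as np

z = 1.0 / (1.0 + np.exp(-np.dot(W, x)))   # forward pass: sigmoid of the weighted input
dx = np.dot(W.T, z * (1 - z))              # backward pass: local gradient w.r.t. x
dW = np.outer(z * (1 - z), x)              # backward pass: local gradient w.r.t. W
```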
If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule.
Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones.
TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video.
Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include:
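Again the embedded snippet is not shown here; a sketch of the core computation, with numpy as np and the same assumed W and x, is:
```python
z = np.maximum(0, np.dot(W, x))   # forward pass: ReLU thresholds activations at zero
dW = np.outer(z > 0, x)           # backward pass: gradient only flows through units with z > 0
```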
If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time.
TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video.
Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero):
This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop.
What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b|>1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead.
TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video.
Let’s look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt:
If you’re familiar with DQN, you can see that there is the target_q_t, which is just [reward + \gamma \max_a Q(s’,a’)], and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good.
The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass.
The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square:
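The snippet is not reproduced in this copy; one way to write such a Huber-style loss in 1.x-era TensorFlow is sketched below, where the threshold of 1.0 and the helper name are illustrative and delta is the TD-error tensor described above:
```python
def huber_loss(x, max_grad=1.0):
    # Quadratic inside [-max_grad, max_grad], linear outside: the gradient is
    # effectively clipped to +/- max_grad but never becomes exactly zero.
    quadratic = tf.minimum(tf.abs(x), max_grad)
    linear = tf.abs(x) - quadratic
    return 0.5 * tf.square(quadratic) + max_grad * linear

loss = tf.reduce_mean(huber_loss(delta))   # delta = target_q_t - q_acted, as above
```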
It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much more simple.
I submitted an issue on the DQN repo and this was promptly fixed.
Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks.
The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding.
That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :)
Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets.
|
Arthur Juliani | 3.5K | 8 | https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2?source=tag_archive---------2---------------- | Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C) | In this article I want to provide a tutorial on implementing the Asynchronous Advantage Actor-Critic (A3C) algorithm in Tensorflow. We will use it to solve a simple challenge in a 3D Doom environment! With the holidays right around the corner, this will be my final post for the year, and I hope it will serve as a culmination of all the previous topics in the series. If you haven’t yet, or are new to Deep Learning and Reinforcement Learning, I suggest checking out the earlier entries in the series before going through this post in order to understand all the building blocks which will be utilized here. If you have been following the series: thank you! I have learned so much about RL in the past year, and am happy to have shared it with everyone through this article series.
So what is A3C? The A3C algorithm was released by Google’s DeepMind group earlier this year, and it made a splash by... essentially obsoleting DQN. It was faster, simpler, more robust, and able to achieve much better scores on the standard battery of Deep RL tasks. On top of all that it could work in continuous as well as discrete action spaces. Given this, it has become the go-to Deep RL algorithm for new challenging problems with complex state and action spaces. In fact, OpenAI just released a version of A3C as their “universal starter agent” for working with their new (and very diverse) set of Universe environments.
Asynchronous Advantage Actor-Critic is quite a mouthful. Let’s start by unpacking the name, and from there, begin to unpack the mechanics of the algorithm itself.
Asynchronous: Unlike DQN, where a single agent represented by a single neural network interacts with a single environment, A3C utilizes multiple incarnations of the above in order to learn more efficiently. In A3C there is a global network, and multiple worker agents which each have their own set of network parameters. Each of these agents interacts with its own copy of the environment at the same time as the other agents are interacting with their environments. The reason this works better than having a single agent (beyond the speedup of getting more work done), is that the experience of each agent is independent of the experience of the others. In this way the overall experience available for training becomes more diverse.
Actor-Critic: So far this series has focused on value-iteration methods such as Q-learning, or policy-iteration methods such as Policy Gradient. Actor-Critic combines the benefits of both approaches. In the case of A3C, our network will estimate both a value function V(s) (how good a certain state is to be in) and a policy π(s) (a set of action probability outputs). These will each be separate fully-connected layers sitting at the top of the network. Critically, the agent uses the value estimate (the critic) to update the policy (the actor) more intelligently than traditional policy gradient methods.
Advantage: If we think back to our implementation of Policy Gradient, the update rule used the discounted returns from a set of experiences in order to tell the agent which of its actions were “good” and which were “bad.” The network was then updated in order to encourage and discourage actions appropriately.
The insight of using advantage estimates rather than just discounted returns is to allow the agent to determine not just how good its actions were, but how much better they turned out to be than expected. Intuitively, this allows the algorithm to focus on where the network’s predictions were lacking. If you recall from the Dueling Q-Network architecture, the advantage function is as follows:
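The equation image is not reproduced in this copy; the advantage function referred to here is:

A(s, a) = Q(s, a) - V(s)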
Since we won’t be determining the Q values directly in A3C, we can use the discounted returns (R) as an estimate of Q(s,a) to allow us to generate an estimate of the advantage.
In this tutorial, we will go even further, and utilize a slightly different version of advantage estimation with lower variance referred to as Generalized Advantage Estimation.
In the process of building this implementation of the A3C algorithm, I used as reference the quality implementations by DennyBritz and OpenAI. Both of which I highly recommend if you’d like to see alternatives to my code here. Each section embedded here is taken out of context for instructional purposes, and won’t run on its own. To view and run the full, functional A3C implementation, see my Github repository.
The general outline of the code architecture is:
The A3C algorithm begins by constructing the global network. This network will consist of convolutional layers to process spatial dependencies, followed by an LSTM layer to process temporal dependencies, and finally, value and policy output layers. Below is example code for establishing the network graph itself.
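The example code embedded in the original post is not reproduced in this copy. As a heavily simplified sketch of the idea (a single dense hidden layer standing in for the convolution + LSTM stack, with the scope name and layer sizes as assumptions), the two output heads can be set up like this in 1.x-era TensorFlow:
```python
import tensorflow as tf

def build_ac_network(scope, s_size, a_size):
    # Simplified actor-critic graph: one hidden layer instead of conv + LSTM,
    # just to show the separate policy and value output layers.
    with tf.variable_scope(scope):
        inputs = tf.placeholder(shape=[None, s_size], dtype=tf.float32)
        hidden = tf.layers.dense(inputs, 256, activation=tf.nn.elu)
        policy = tf.layers.dense(hidden, a_size, activation=tf.nn.softmax)  # pi(s): action probabilities
        value = tf.layers.dense(hidden, 1)                                   # V(s): state-value estimate
    return inputs, policy, value
```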
Next, a set of worker agents, each with its own network and environment, is created. Each of these workers runs on a separate processor thread, so there should be no more workers than there are threads on your CPU.
~ From here we go asynchronous ~
Each worker begins by setting its network parameters to those of the global network. We can do this by constructing a Tensorflow op which sets each variable in the local worker network to the equivalent variable value in the global network.
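A sketch of such an op is shown below; it assumes the global and worker networks were each built inside a named variable scope (for example 'global' and 'worker_0'):
```python
def update_target_graph(from_scope, to_scope):
    # Returns ops that copy every trainable variable in `from_scope` into `to_scope`.
    from_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, from_scope)
    to_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, to_scope)
    return [to_var.assign(from_var) for from_var, to_var in zip(from_vars, to_vars)]

sync_to_global = update_target_graph('global', 'worker_0')   # run at the start of each rollout
```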
Each worker then interacts with its own copy of the environment and collects experience. Each keeps a list of experience tuples (observation, action, reward, done, value) that is constantly added to from interactions with the environment.
Once the worker’s experience history is large enough, we use it to determine discounted return and advantage, and use those to calculate value and policy losses. We also calculate an entropy (H) of the policy. This corresponds to the spread of action probabilities. If the policy outputs actions with relatively similar probabilities, then entropy will be high, but if the policy suggests a single action with a large probability then entropy will be low. We use the entropy as a means of improving exploration, by encouraging the model to be conservative regarding its sureness of the correct action.
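As a sketch of how those three quantities fit together (the policy and value tensors come from the network sketch above, the placeholder shapes and the 0.5 and 0.01 coefficients are assumptions for illustration):
```python
# Value loss, policy loss and entropy bonus for one worker (illustrative coefficients).
a_size = 3                                                      # number of discrete actions (assumption)
actions_onehot = tf.placeholder(shape=[None, a_size], dtype=tf.float32)
target_v = tf.placeholder(shape=[None], dtype=tf.float32)       # discounted returns
advantages = tf.placeholder(shape=[None], dtype=tf.float32)     # (generalized) advantage estimates

responsible_outputs = tf.reduce_sum(policy * actions_onehot, [1])   # pi(a_t | s_t) of the taken actions

value_loss = 0.5 * tf.reduce_sum(tf.square(target_v - tf.reshape(value, [-1])))
entropy = -tf.reduce_sum(policy * tf.log(policy + 1e-8))            # high when probabilities are spread out
policy_loss = -tf.reduce_sum(tf.log(responsible_outputs + 1e-8) * advantages)

loss = 0.5 * value_loss + policy_loss - 0.01 * entropy              # entropy bonus encourages exploration
```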
A worker then uses these losses to obtain gradients with respect to its network parameters. Each of these gradients are typically clipped in order to prevent overly-large parameter updates which can destabilize the policy.
A worker then uses the gradients to update the global network parameters. In this way, the global network is constantly being updated by each of the agents, as they interact with their environment.
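Putting the last two steps together in code form (the scope names, the clipping norm of 40 and the optimizer choice are illustrative assumptions):
```python
# Gradients of the worker's loss w.r.t. its local variables, clipped by global
# norm and applied to the corresponding global variables.
local_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'worker_0')
global_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'global')

gradients = tf.gradients(loss, local_vars)
clipped_grads, grad_norm = tf.clip_by_global_norm(gradients, 40.0)

trainer = tf.train.AdamOptimizer(learning_rate=1e-4)
apply_grads = trainer.apply_gradients(list(zip(clipped_grads, global_vars)))
```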
Once a successful update is made to the global network, the whole process repeats! The worker then resets its own network parameters to those of the global network, and the process begins again.
To view the full and functional code, see the Github repository here.
The robustness of A3C allows us to tackle a new generation of reinforcement learning challenges, one of which is 3D environments! We have come a long way from multi-armed bandits and grid-worlds, and in this tutorial, I have set up the code to allow for playing through the first VizDoom challenge. VizDoom is a system to allow for RL research using the classic Doom game engine. The maintainers of VizDoom recently created a pip package, so installing it is as simple as:
pip install vizdoom
Once it is installed, we will be using the basic.wad environment, which is provided in the Github repository, and needs to be placed in the working directory.
The challenge consists of controlling an avatar from a first person perspective in a single square room. There is a single enemy on the opposite side of the room, which appears in a random location each episode. The agent can only move to the left or right, and fire a gun. The goal is to shoot the enemy as quickly as possible using as few bullets as possible. The agent has 300 time steps per episode to shoot the enemy. Shooting the enemy yields a reward of 1, and each time step as well as each shot yields a small penalty. After about 500 episodes per worker agent, the network learns a policy to quickly solve the challenge. Feel free to adjust parameters such as learning rate, clipping magnitude, update frequency, etc. to attempt to achieve ever greater performance or utilize A3C in your own RL tasks.
I hope this tutorial has been helpful to those new to A3C and asynchronous reinforcement learning! Now go forth and build AIs.
(There are a lot of moving parts in A3C, so if you discover a bug, or find a better way to do something, please don’t hesitate to bring it up here or in the Github. I am more than happy to incorporate changes and feedback to improve the algorithm.)
If you’d like to follow my writing on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on twitter @awjuliani.
If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated!
More from my Simple Reinforcement Learning with Tensorflow series:
Deep Learning @Unity3D & Cognitive Neuroscience PhD student.
Exploring frontier technology through the lens of artificial intelligence, data science, and the shape of things to come
|
Rohan Kapur | 1K | 30 | https://ayearofai.com/rohan-lenny-1-neural-networks-the-backpropagation-algorithm-explained-abf4609d4f9d?source=tag_archive---------3---------------- | Rohan & Lenny #1: Neural Networks & The Backpropagation Algorithm, Explained | In Rohan’s last post, he talked about evaluating and plugging holes in his knowledge of machine learning thus far. The backpropagation algorithm — the process of training a neural network — was a glaring one for both of us in particular. Together, we embarked on mastering backprop through some great online lectures from professors at MIT & Stanford. After attempting a few programming implementations and hand solutions, we felt equipped to write an article for AYOAI — together.
Today, we’ll do our best to explain backpropagation and neural networks from the beginning. If you have an elementary understanding of differential calculus and perhaps an intuition of what machine learning is, we hope you come out of this blog post with an (acute, but existent nonetheless) understanding of neural networks and how to train them. Let us know if we succeeded!
Let’s start off with a quick introduction to the concept of neural networks. Fundamentally, neural networks are nothing more than really good function approximators — you give a trained network an input vector, it performs a series of operations, and it produces an output vector. To train our network to estimate an unknown function, we give it a collection of data points — which we denote the “training set” — that the network will learn from and generalize on to make future inferences.
Neural networks are structured as a series of layers, each composed of one or more neurons (as depicted above). Each neuron produces an output, or activation, based on the outputs of the previous layer and a set of weights.
When using a neural network to approximate a function, the data is forwarded through the network layer-by-layer until it reaches the final layer. The final layer’s activations are the predictions that the network actually makes.
All this probably seems kind of magical, but it actually works. The key is finding the right set of weights for all of the connections to make the right decisions (this happens in a process known as training) — and that’s what most of this post is going to be about.
When we’re training the network, it’s often convenient to have some metric of how good or bad we’re doing; we call this metric the cost function. Generally speaking, the cost function looks at the function the network has inferred and uses it to estimate values for the data points in our training set. The discrepancies between the outputs in the estimations and the training set data points are the principle values for our cost function. When training our network, the goal will be to get the value of this cost function as low as possible (we’ll see how to do that in just a bit, but for now, just focus on the intuition of what a cost function is and what it’s good for). Generally speaking, the cost function should be more or less convex, like so:
In reality, it’s impossible for any network or cost function to be truly convex. However, as we’ll soon see, local minima may not be a big deal, as long as there is still a general trend for us to follow to get to the bottom. Also, notice that the cost function is parameterized by our network’s weights — we control our loss function by changing the weights.
One last thing to keep in mind about the loss function is that it doesn’t just have to capture how correctly your network estimates — it can specify any objective that needs to be optimized. For example, you generally want to penalize larger weights, as they could lead to overfitting. If this is the case, simply adding a regularization term to your cost function that expresses how big your weights will mean that, in the process of training your network, it will look for a solution that has the best estimates possible while preventing overfitting.
Now, let’s take a look at how we can actually minimize the cost function during the training process to find a set of weights that work the best for our objective.
Now that we’ve developed a metric for “scoring” our network (which we’ll denote as J(W)), we need to find the weights that will make that score as low as possible. If you think back to your pre-calculus days, your first instinct might be to set the derivative of the cost function to zero and solve, which would give us the locations of every minimum/maximum in the function. Unfortunately, there are a few problems with this approach:
Especially as the size of networks begins to scale up, solving for the weights directly becomes increasingly infeasible. Instead, we look at a different class of algorithms, called iterative optimization algorithms, that progressively work their way towards the optimal solution.
The most basic of these algorithms is gradient descent. Recall that our cost function will be essentially convex, and we want to get as close as possible to the global minimum. Instead of solving for it analytically, gradient descent follows the derivatives to essentially “roll” down the slope until it finds its way to the center.
Let’s take the example of a single-weight neural network, whose cost function is depicted below.
We start off by initializing our weight randomly, which puts us at the red dot on the diagram above. Taking the derivative, we see the slope at this point is a pretty big positive number. We want to move closer to the center — so naturally, we should take a pretty big step in the opposite direction of the slope.
If we repeat the process enough, we soon find ourselves nearly at the bottom of our curve and much closer to the optimal weight configuration for our network.
More formally, gradient descent looks something like this:
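The update-rule image is not reproduced in this copy; in symbols, the rule dissected in the next paragraph is:

W := W - \alpha \, \frac{\partial J(W)}{\partial W}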
Let’s dissect. Every time we want to update our weights, we subtract the derivative of the cost function w.r.t. the weight itself, scaled by a learning rate α (alpha), and — that’s it! You’ll see that as it gets closer and closer to the center, the derivative term gets smaller and smaller, converging to zero as it approaches the solution. The same process applies with networks that have tens, hundreds, thousands, or more parameters — compute the gradient of the cost function w.r.t. each of the weights, and update each of your weights accordingly.
I do want to say a few more words on the learning rate, because it’s one of the more important hyperparameters (“settings” for your neural network) that you have control over. If the learning rate is too high, it could jump too far in the other direction, and you never get to the minimum you’re searching for. Set it too low, and your network will take ages to find the right weights, or it will get stuck in a local minimum. There’s no “magic number” to use when it comes to a learning rate, and it’s usually best to try several and pick the one that works the best for your individual network and dataset. In practice, many choose to anneal the learning rate over time — it starts out high, because it’s furthest from the solution, and decays as it gets closer.
But as it turns out, gradient descent is kind of slow. Really slow, actually. Earlier I used the analogy of the weights “rolling” down the gradient to get to the bottom, but that doesn’t actually make any sense — it should pick up speed as it gets to the bottom, not slow down! Another iterative optimization algorithm, known as momentum, does just that. As the weights begin to “roll” down the slope, they pick up speed. When they get closer to the solution, the momentum that they picked up carries them closer to the optima while gradient descent would simply stop. As a result, training with momentum updates is both faster and can provide better results.
Here’s what the update rule looks like for momentum:
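The image is again not reproduced here; the update rule described in the next paragraph can be written as:

V := \mu V - \alpha \, \frac{\partial J(W)}{\partial W}, \qquad W := W + V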
As we train, we accumulate a “velocity” value V. At each training step, we update V with the gradient at the current position (once again scaled by the learning rate). Also notice that, with each time step, we decay velocity V by a factor mu (usually somewhere around .9), so that over time we lose momentum instead of bouncing around by the minimum forever. We then update our weight in the direction of the velocity, and repeat the process again. Over the first few training iterations, V will grow as our weights “pick up speed” and take successively bigger leaps. As we approach the minimum, our velocity stops accumulating as quickly, and eventually begins to decay, until we’ve essentially reached the minimum. An important thing to note is that we accumulate a velocity independently for each weight — just because one weight is changing particularly clearly doesn’t mean any of the other weights need to be.
There are lots of other iterative optimization algorithms that are commonly used with neural networks, but I won’t go into all of them here (if you’re curious, some of the more popular ones include Adagrad and Adam). The basic principle remains the same throughout — gradually update the weights to get them closer to the minimum. But regardless of which optimization algorithm you use, we still need to be able to compute the gradient of the cost function w.r.t. each weight. But our cost function isn’t a simple parabola anymore — it’s a complicated, many-dimensional function with countless local optima that we need to watch out for. That’s where backpropagation comes in.
The backpropagation algorithm was a major milestone in machine learning because, before it was discovered, optimization methods were extremely unsatisfactory. One popular method was to perturb (adjust) the weights in a random, uninformed direction (ie. increase or decrease) and see if the performance of the ANN increased. If it did not, one would attempt to either a) go in the other direction b) reduce the perturbation size or c) a combination of both. Another attempt was to use Genetic Algorithms (which became popular in AI at the same time) to evolve a high-performance neural network. In both cases, without (analytically) being informed on the correct direction, results and efficiency were suboptimal. This is where the backpropagation algorithm comes into play.
Recall that, for any given supervised machine learning problem, we (aim to) select weights that provide the optimal estimation of a function that models our training data. In other words, we want to find a set of weights W that minimizes on the output of J(W). We discussed the gradient descent algorithm — one where we update each weight by some negative, scalar reduction of the error derivative with respect to that weight. If we do choose to use gradient descent (or almost any other convex optimization algorithm), we need to find said derivatives in numerical form.
For other machine learning algorithms like logistic regression or linear regression, computing the derivatives is an elementary application of differentiation. This is because the outputs of these models are just the inputs multiplied by some chosen weights, and at most fed through a single activation function (the sigmoid function in logistic regression). The same, however, cannot be said for neural networks. To demonstrate this, here is a diagram of a double-layered neural network:
As you can see, each neuron is a function of the previous one connected to it. In other words, if one were to change the value of w1, both “hidden 1” and “hidden 2” (and ultimately the output) neurons would change. Because of this notion of functional dependencies, we can mathematically formulate the output as an extensive composite function:
And thus:
Here, the output is a composite function of the weights, inputs, and activation function(s). It is important to realize that the hidden units/nodes are simply intermediary computations that, in actuality, can be reduced down to computations of the input layer.
If we were to then take the derivative of said function with respect to some arbitrary weight (for example w1), we would iteratively apply the chain rule (which I’m sure you all remember from your calculus classes). The result would look similar to the following:
Now, let’s attach a black box to the tail of our neural network. This black box will compute and return the error — using the cost function — from our output:
All we’ve done is add another functional dependency; our error is now a function of the output and hence a function of the input, weights, and activation function. If we were to compute the derivative of the error with any arbitrary weight (again, we’ll choose w1), the result would be:
Each of these derivatives can be simplified once we choose an activation and error function, such that the entire result would represent a numerical value. At that point, any abstraction has been removed, and the error derivative can be used in gradient descent (as discussed earlier) to iteratively improve upon the weight. We compute the error derivatives w.r.t. every other weight in the network and apply gradient descent in the same way. This is backpropagation — simply the computation of derivatives that are fed to a convex optimization algorithm. We call it “backpropagation” because it almost seems as if we are traversing from the output error to the weights, taking iterative steps using the chain rule until we “reach” our weight.
When I first truly understood the backprop algorithm (just a couple of weeks ago), I was taken aback by how simple it was. Sure, the actual arithmetic/computations can be difficult, but this process is handled by our computers. In reality, backpropagation is just a rather tedious (but again, for a generalized implementation, computers will handle this) application of the chain rule. Since neural networks are convoluted multilayer machine learning model structures (at least relative to other ones), each weight “contributes” to the overall error in a more complex manner, and hence the actual derivatives require a lot of effort to produce. However, once we get past the calculus, backpropagation of neural nets is equivalent to typical gradient descent for logistic/linear regression.
Thus far, I’ve walked through a very abstract form of backprop for a simple neural network. However, it is unlikely that you will ever use a single-layered ANN in applications. So, now, let’s make our black boxes — the activation and error functions — more concrete such that we can perform backprop on a multilayer neural net.
Recall that our error function J(W) will compute the “error” of our neural network based on the output predictions it produces vs. the correct a priori outputs we know in our training set. More formally, if we denote our predicted output estimations as vector p, and our actual output as vector a, then we can use:
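The formula image is not reproduced here; given the description that follows (squared discrepancies, multiplied by one half), the cost is:

J(W) = \frac{1}{2} \sum_{k} (p_k - a_k)^2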
This is just one example of a possible cost function (the log-likelihood is also a popular one), and we use it because of its mathematical convenience (this is a notion one will frequently encounter in machine learning): the squared expression exaggerates poor solutions and ensures each discrepancy is positive. It will soon become clear why we multiply the expression by half.
The derivative of the error w.r.t. the output was the first term in the error w.r.t. weight derivative expression we formulated earlier. Let’s now compute it!
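The worked image is not reproduced here; differentiating the half-sum-of-squares cost above with respect to the prediction gives:

\frac{\partial J}{\partial p} = p - a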
Our result is simply our predictions take away our actual outputs.
Now, let’s move on to the activation function. The activation function used depends on the context of the neural network. If we aren’t in a classification context, ReLU (Rectified Linear Unit, which is zero if input is negative, and the identity function when the input is positive) is commonly used today.
If we’re in a classification context (that is, predicting on a discrete state with a probability, e.g. whether an email is spam), we can use the sigmoid or tanh (hyperbolic tangent) function such that we can “squeeze” any value into the range 0 to 1. These are used instead of a typical step function because their “smoothness” properties allow the derivatives to be non-zero. The derivative of the step function before and after the origin is zero. This will pose issues when we try to update our weights (nothing much will happen!).
Now, let’s say we’re in a classification context and we choose to use the sigmoid function, which is of the following equation:
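The equation image is not reproduced here; the sigmoid function is:

\sigma(x) = \frac{1}{1 + e^{-x}}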
As per usual, we’ll compute the derivative using differentiation rules as:
EDIT: On the 2nd line, the denominator should be raised to +2, not -2. Thanks to a reader for pointing this out.
Sidenote: ReLU activation functions are also commonly used in classification contexts. There are downsides to using the sigmoid function — particularly the “vanishing gradient” problem — which you can read more about here.
The sigmoid function is mathematically convenient (there it is again!) because we can represent its derivative in terms of the output of the function. Isn’t that cool‽
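Concretely, the result of that differentiation, written in terms of the sigmoid’s own output, is:

\sigma'(x) = \sigma(x)\,(1 - \sigma(x))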
We are now in a good place to perform backpropagation on a multilayer neural network. Let me introduce you to the net we are going to work with:
This net is still not as complex as one you may use in your programming, but its architecture allows us to nevertheless get a good grasp on backprop. In this net, we have 3 input neurons and one output neuron. There are four layers in total: one input, one output, and two hidden layers. There are 3 neurons in each hidden layer, too (which, by the way, need not be the case). The network is fully connected; there are no missing connections. Each neuron/node (save the inputs, which are usually pre-processed anyways) is an activity; it is the weighted sum of the previous neurons’ activities applied to the sigmoid activation function.
To perform backprop by hand, we need to introduce the different variables/states at each point (layer-wise) in the neural network:
It is important to note that every variable you see here is a generalization on the entire layer at that point. For example, when I say x_i, I am referring to the input to any input neuron (arbitrary value of i). I chose to place it in the middle of the layer for visibility purposes, but that does not mean that x_i refers to the middle neuron. I’ll demonstrate and discuss the implications of this later on.
x refers to the input layer, y refers to hidden layer 1, z refers to hidden layer 2, and p refers to the prediction/output layer (which fits in nicely with the notation used in our cost function). If a variable has the subscript i, it means that the variable is the input to the relevant neuron at that layer. If a variable has the subscript j, it means that the variable is the output of the relevant neuron at that layer. For example, x_i refers to any input value we enter into the network. x_j is actually equal to x_i, but this is only because we choose not to use an activation function — or rather, we use the identity activation function — in the input layer’s activities. We only include these two separate variables to retain consistency. y_i is the input to any neuron in the first hidden layer; it is the weighted sum of all previous neurons (each neuron in the input layer multiplied by the corresponding connecting weights). y_j is the output of any neuron at the hidden layer, so it is equal to activation_function(y_i) = sigmoid(y_i) = sigmoid(weighted_sum_of_x_j). We can apply the same logic for z and p. Ultimately, p_j is the sigmoid output of p_i and hence is the output of the entire neural network that we pass to the error/cost function.
The weights are organized into three separate variables: W1, W2, and W3. Each W is a matrix (if you are not comfortable with Linear Algebra, think of a 2D array) of all the weights at the given layer. For example, W1 are the weights that connect the input layer to the hidden layer 1. Wlayer_ij refers to any arbitrary, single weight at a given layer. To get an intuition of ij (which is really i, j), Wlayer_i are all the weights that connect arbitrary neuron i at a given layer to the next layer. Wlayer_ij (adding the j component) is the weight that connects arbitrary neuron i at a given layer to an arbitrary neuron j at the next layer. Essentially, Wlayer is a vector of Wlayer_is, which is a vector of real-valued Wlayer_ijs.
NOTE: Please note that the i’s and j’s in the weights and other variables are completely different. These indices do not correspond in any way. In fact, for x/y/z/p, i and j do not represent tensor indices at all, they simply represent the input and output of a neuron. Wlayer_ij represents an arbitrary weight at an index in a weight matrix, and x_j/y_j/z_j/p_j represent an arbitrary input/output point of a neuron unit.
That last part about weights was tedious! It’s crucial to understand how we’re separating the neural network here, especially the notion of generalizing on an entire layer, before moving forward.
To acquire a comprehensive intuition of backpropagation, we’re going to backprop this neural net as discussed before. More specifically, we’re going to find the derivative of the error w.r.t. an arbitrary weight in the input layer (W1_ij). We could find the derivative of the error w.r.t. an arbitrary weight in the first or second hidden layer, but let’s go as far back as we can; the more backprop, the better!
So, mathematically, we are trying to obtain (to perform our iterative optimization algorithm with):
We can express this graphically/visually, using the same principles as earlier (chain rule), like so:
In two layers, we have three red lines pointing in three different directions, instead of just one. This is a reinforcement of (and why it is important to understand) the fact that variable j is just a generalization/represents any point in the layer. So, when we differentiate p_i with respect to the layer before that, there are three different weights, as I hope you can see, in W3_ij that contribute to the value p_i. There also happen to be three weights in W3 in total, but this isn’t the case for the layers before; it is only the case because layer p has one neuron — the output — in it. We stop backprop at the input layer and so we just point to the single weight we are looking for.
Wonderful! Now let’s work out all this great stuff mathematically. Immediately, we know:
We have already established the left hand side, so now we just need to use the chain rule to simplify it further. The derivative of the error w.r.t. the weight can be written as the derivative of the error w.r.t. the output prediction multiplied by the derivative of the output prediction w.r.t. the weight. At this point, we’ve traversed one red line back. We know this because
is reducible to a numerical value. Specifically, the derivative of the error w.r.t. the output prediction is:
Hence:
Going one more layer backwards, we can determine that:
In other words, the derivative of the output prediction w.r.t. the weight is the derivative of the output w.r.t. the input to the output layer (p_i) multiplied by the derivative of that value w.r.t. the weight. This represents our second red line. We can solve the first term like so:
This corresponds with the derivative of the sigmoid function we solved earlier, which was equal to the output multiplied by one minus the output. In this case, p_j is the output of the sigmoid function. Now, we have:
Let’s move on to the third red line(s). This one is interesting because we begin to “spread” out. Since there are multiple different weights that contribute to the value of p_i, we need to take into account their individual “pull” factors into our derivative:
If you’re a mathematician, this notation may irk you slightly; sorry if that’s the case! In computer science, we tend to stray from the notion of completely legal mathematical expressions. This is yet again another reason why it’s key to understand the role of layer generalization; z_j here is not just referring to the middle neuron, it’s referring to an arbitrary neuron. The actual value of j in the summation is not changing (it’s not even an index or a value in the first place), and we don’t really consider it. It’s less of a mathematical expression and more of a statement that we will iterate through each generalized neuron z_j and use it. In other words, we iterate over the derivative terms and sum them up using z_1, z_2, and z_3. Before, we could write p_j as any single value because the output layer just contains one node; there is just one p_j. But we see here that this is no longer the case. We have multiple z_j values, and p_i is functionally dependent on each of these z_j values. So, when we traverse from p_j to the preceding layer, we need to consider each contribution from layer z to p_j separately and add them up to create one total contribution. There’s no upper bound to the summation; we just assume that we start at zero and end at our maximum value for the number of neurons in the layer. Please again note that the same changes are not reflected in W1_ij, where j refers to an entirely different thing. Instead, we’re just stating that we will use the different z_j neurons in layer z.
Since p_i is a summation of each weight multiplied by each z_j (weighted sum), if we were to take the derivative of p_i with any arbitrary z_j, the result would be the connecting weight since said weight would be the coefficient of the term (derivative of m*x w.r.t. x is just m):
W3_ij is loosely defined here. ij still refers to any arbitrary weight — where ij are still separate from the j used in p_i/z_j — but again, as computer scientists and not mathematicians, we need not be pedantic about the legality and intricacy of expressions; we just need an intuition of what the expressions imply/mean. It’s almost a succinct form of pseudo-code! So, even though this defines an arbitrary weight, we know it means the connecting weight. We can also see this from the diagram: when we walk from p_j to an arbitrary z_j, we walk along the connecting weight. So now, we have:
At this point, I like to continue playing the “reduction test”. The reduction test states that, if we can further simplify a derivative term, we still have more backprop to do. Since we can’t yet quite put the derivative of z_j w.r.t. W1_ij into a numerical term, let’s keep going (and fast-forward a bit). Using chain rule, we follow the fourth line back to determine that:
Since z_j is the sigmoid of z_i, we use the same logic as the previous layer and apply the sigmoid derivative. The derivative of z_i w.r.t. W1_ij, demonstrated by the fifth line(s) back, requires the same idea of “spreading out” and summation of contributions:
Briefly, since z_i is the weighted sum of each y_j in y, we sum over the derivatives which, similar to before, simplifies to the relevant connecting weights in the preceding layer (W2 in this case).
We’re almost there, let’s go further; there’s still more reduction to do:
We have, of course, another sigmoid activation function to deal with. This is the sixth red line. Notice, now, that we have just one line remaining. In fact, our last derivative term here passes (or rather, fails) the reduction test! The last line traverses from the input at y_i to x_j, walking along W1_ij. Wait a second — is this not what we are attempting to backprop to? Yes, it is! Since we are, for the first time, directly deriving y_i w.r.t. the weight W1_ij, we can think of the coefficient of W1_ij as being x_j in our weighted sum (instead of the vice versa as used previously). Hence, the simplification follows:
Of course, since each x_j in layer x contributes to the weighted sum y_i, we sum over the effects. And that’s it! We can’t reduce any further from here. Now, let’s tie all these individual expressions together:
EDIT: The denominator on the left hand side should say dW1ij instead of “layer”.
With no more partial derivative terms left, our work is complete! This gives us the derivative of the error w.r.t. any arbitrary weight in the input layer/W1. That was a lot of work — maybe now we can sympathize with the poor computers!
Something you should notice is that values such as p_j, a, z_j, y_j, x_j, etc. are the values of the network at different points. But where do they come from? We need to perform a feed-forward pass of the neural network first and capture these values along the way.
Our task is to now perform Gradient Descent to train the neural net:
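Spelled out for an individual weight, the update takes the standard gradient descent form (alpha being the learning rate; the same applies to every weight in W2 and W3):
W1_ij ← W1_ij − alpha · dE/dW1_ij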
We perform gradient descent on each weight in each layer. Notice that the resulting gradient should change each time, because the weight itself changes (and, as a result, the performance and output of the entire net change), even if only by a small perturbation. This means that, at each update, we need to do a feed-forward of the neural net. Not just once before, but once each iteration.
These are then the steps to train an entire neural network:
It’s important to note that one must not initialize the weights to zero, similar to what may be done in other machine learning algorithms. If weights are initialized to zero, after each update, the outgoing weights of each neuron will be identical, because the gradients will be identical (which can be proved). Because of this, the proceeding hidden units will remain the same value and will continue to follow each other. Ultimately, this means that our training will become extremely constrained (due to the “symmetry”), and we won’t be able to build interesting functions. Also, neural networks may get stuck at local optima (places where the gradient is zero but are not the global minima), so random weight initialization allows one to hopefully have a chance of circumventing that by starting at many different random values.
3. Perform one feed-forward using the training data
4. Perform backpropagation to get the error derivatives w.r.t. each and every weight in the neural network
5. Perform gradient descent to update each weight by subtracting the respective error derivative scaled by some learning rate alpha. Increment the number of iterations.
6. If we have converged (in reality, though, we just stop when we have reached the maximum number of iterations), training is complete. Else, repeat starting at step 3.
If we initialize our weights randomly (and not to zero) and then perform gradient descent with derivatives computed from backpropagation, we should expect to train a neural network in no time! I hope this example brought clarity to how backprop works and the intuition behind it. If you didn’t understand the intricacies of the example but understand and appreciate the concept of backprop as a whole, you’re still in a great place! Next we’ll go ahead and explain backprop code that works on any generalized architecture of a neural network using the ReLU activation function.
Now that we’ve developed the math and intuition behind backpropagation, let’s try to implement it. We’ll divide our implementation into three distinct steps:
Let’s start off by defining what the API we’re implementing looks like. We’ll define our network as a series of Layer instances that our data passes through — this means that instead of modeling each individual neuron, we group neurons from a single layer together. This makes it a bit easier to reason about larger networks, but also makes the actual computations faster (as we’ll see shortly). Also — we’re going to write the code in Python.
Each layer will have the following API:
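As a rough sketch (the method names forward, backward, and update are the ones referred to throughout the rest of this section; everything else here is illustrative):

import numpy as np

class Layer:
    def __init__(self, size_in, size_out):
        # Weight initialization is covered next.
        pass

    def forward(self, input):
        # Returns this layer's activations for the given input activations.
        pass

    def backward(self, out_grad):
        # Computes the gradient of the cost w.r.t. this layer's weights
        # (stored for the update step) and returns the gradient to pass
        # along to the previous layer.
        pass

    def update(self, learning_rate):
        # Applies the gradient descent step to the weights.
        pass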
(This isn’t great API design — ideally, we would decouple the backprop and weight update from the rest of the object, so the specific algorithm we use for updating weights isn’t tied to the layer itself. But that’s not the point, so we’ll stick with this design for the purposes of explaining how backpropagation works in a real-life scenario. Also: we’ll be using numpy throughout the implementation. It’s an awesome tool for mathematical operations in Python (especially tensor based ones), but we don’t have the time to get into how it works — if you want a good introduction, here ya’ go.)
We can start by implementing the weight initialization. As it turns out, how you initialize your weights is actually kind of a big deal for both network performance and convergence rates. Here’s how we’ll initialize our weights:
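A sketch of that initialization, consistent with the description in the next paragraph; the (size_in, size_out) shape is an assumption that matches the dot product used in the forward pass below:

def __init__(self, size_in, size_out):
    self.size_in = size_in
    self.size_out = size_out
    # Normally-distributed values, scaled so the variance works out to 2 / size_in.
    self.weights = np.random.randn(size_in, size_out) * np.sqrt(2.0 / self.size_in)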
This initializes a weight matrix of the appropriate dimensions with random values sampled from a normal distribution. We then scale it by the square root of 2/self.size_in, giving us a variance of 2/self.size_in (derivation here).
And that’s all we need for layer initialization! Let’s move on to implementing our first objective — feed-forward. This is actually pretty simple — a dot product of our input activations with the weight matrix, followed by our activation function, will give us the activations we need. The dot product part should make intuitive sense; if it doesn’t, you should sit down and try to work through it on a piece of paper. This is where the performance gains of grouping neurons into layers comes from: instead of keeping an individual weight vector for each neuron, and performing a series of vector dot products, we can just do a single matrix operation (which, thanks to the wonders of modern processors, is significantly faster). In fact, we can compute all of the activations from a layer in just two lines:
Simple enough. Let’s move on to backpropagation.
This one’s a bit more involved. First, we compute the derivative of the output w.r.t. the weights, then the derivative of the cost w.r.t. the output, followed by chain rule to get the derivative of the cost w.r.t. the weights.
Let’s start with the first part — the derivative of the output w.r.t. the weights. That should be simple enough; because you’re multiplying the weight by the corresponding input activation, the derivative will just be the corresponding input activation.
Except, because we’re using the ReLU activation function, the weights have no effect if the corresponding output is < 0 (because it gets capped anyway). This should take care of that hiccup:
(More formally, you’re multiplying by the derivative of the activation function, which is 0 when the activation is < 0 and 1 elsewhere.)
Let’s take a brief detour to talk about the out_grad parameter that our backward method gets. Let’s say we have a network with two layers: the first has m neurons, and the second has n. Each of the m neurons produces an activation, and each of the n neurons looks at each of the m activations. The out_grad parameter is an m x n matrix of how each m affects each of the n neurons it feeds into.
Now, we need the derivative of the cost w.r.t. each of the outputs — which is essentially the out_grad parameter we’re given! We just need to sum up each row of the matrix we’re given, as per the backpropagation formula.
Finally, we end up with something like this:
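Putting those pieces together, here is a sketch of the first half of backward; the names cost_wrt_outputs, relu_grad, and cost_wrt_weights are illustrative, and the weight gradient is stored on self so the update step can use it later:

def backward(self, out_grad):
    # Derivative of the cost w.r.t. each of this layer's outputs:
    # the row sums of the incoming gradient matrix.
    cost_wrt_outputs = np.sum(out_grad, axis=1)
    # ReLU derivative: 1 where the layer activated, 0 where the output was capped.
    relu_grad = (self.output > 0).astype(float)
    # Chain rule: dCost/dW[i, j] = input[i] * relu'(output[j]) * dCost/dOutput[j].
    self.cost_wrt_weights = np.outer(self.input, relu_grad * cost_wrt_outputs)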
Now, we need to compute the derivative of our inputs to pass along to the next layer. We can perform a similar chain rule — derivative of the output w.r.t. the inputs times the derivative of the cost w.r.t. the outputs.
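Continuing the same method, a sketch of that input gradient; each row of the returned matrix describes how one input affects each of this layer's outputs, which is exactly the out_grad format the previous layer expects:

    # Derivative of the cost w.r.t. each input, via each output: the connecting
    # weight times the ReLU derivative times the cost gradient for that output.
    cost_wrt_inputs = self.weights * (relu_grad * cost_wrt_outputs)
    return cost_wrt_inputs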
And that’s it for the backpropagation step.
The final step is the weight update. Assuming we’re sticking with gradient descent for this example, this can be a simple one-liner:
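A sketch of that one-liner, using the gradient stored during the backward pass:

def update(self, learning_rate):
    # Vanilla gradient descent: step each weight against its gradient.
    self.weights -= learning_rate * self.cost_wrt_weights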
To actually train our network, we take one of our training samples and call forward on each layer consecutively, passing the output of the previous layer as the input of the following layer. We compute dJ, passing that as the out_grad parameter to the last layer’s backward method. We call backward on each of the layers in reverse order, this time passing the output of the further layer as out_grad to the previous layer. Finally, we call update on each of our layers and repeat.
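Tying the whole thing together, one training iteration could look roughly like this. The layer sizes are illustrative, and the cost is assumed here to be a simple squared error, so dJ is just the difference between prediction and target, shaped as a column so that backward can sum along each row:

layers = [Layer(784, 128), Layer(128, 10)]  # illustrative sizes

def train_step(x, target, learning_rate=0.01):
    # Feed-forward: each layer's output is the next layer's input.
    activations = x
    for layer in layers:
        activations = layer.forward(activations)

    # Derivative of a squared-error cost w.r.t. the final outputs (dJ).
    out_grad = (activations - target).reshape(-1, 1)

    # Backpropagation, in reverse order, threading the gradients through.
    for layer in reversed(layers):
        out_grad = layer.backward(out_grad)

    # Weight update.
    for layer in layers:
        layer.update(learning_rate)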
There’s one last detail that we should include, which is the concept of a bias (akin to that of a constant term in any given equation). Notice that, with our current implementation, the activation of a neuron is determined solely based on the activations of the previous layer. There’s no bias term that can shift the activation up or down independent of the inputs. A bias term isn’t strictly necessary — in fact, if you train your network as-is, it would probably still work fine. But if you do need a bias term, the code stays almost the same — the only difference is that you need to add a column of 1s to the incoming activations, and update your weight matrix accordingly, so one of your weights gets treated as a bias term. The only other difference is that, when returning cost_wrt_inputs, you can cut out the first row — nobody cares about the gradients associated with the bias term because the previous layer has no say in the activation of the bias neuron.
Implementing backpropagation can be kind of tricky, so it’s often a good idea to check your implementation. You can do so by computing the gradient numerically (by literally perturbing the weight and calculating the difference in your cost function) and comparing it to your backpropagation-computed gradient. This gradient check doesn’t need to be run once you’ve verified your implementation, but it could save a lot of time tracking down potential problems with your network.
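A sketch of such a check for a single weight; cost(x, target) is an assumed helper that runs a feed-forward and evaluates the cost function:

def numerical_gradient(layer, i, j, x, target, eps=1e-4):
    # Nudge one weight in each direction and measure the change in the cost.
    original = layer.weights[i, j]
    layer.weights[i, j] = original + eps
    cost_plus = cost(x, target)
    layer.weights[i, j] = original - eps
    cost_minus = cost(x, target)
    layer.weights[i, j] = original
    return (cost_plus - cost_minus) / (2 * eps)

# For the same training sample, this value should closely match
# layer.cost_wrt_weights[i, j] as computed by backpropagation.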
Nowadays, you often don’t even need to implement a neural network on your own, as libraries such as Caffe, Torch, or TensorFlow will have implementations ready to go. That being said, it’s often a good idea to try implementing it on your own to get a better grasp of how everything works under the hood.
Intrigued? Looking to learn more about neural networks? Here are some great online classes to get you started:
Stanford’s CS231n. Although it’s technically about convolutional neural networks, the class provides an excellent introduction to and survey of neural networks in general. Class videos, notes, and assignments are all posted here, and if you have the patience for it I would strongly recommend walking through the assignments so you can really get to know what you’re learning.
MIT 6.034. This class, taught by Prof. Patrick Henry Winston, explores many different algorithms and disciplines in Artificial Intelligence. There's a great lecture on backprop that I actually used as a stepping stone to getting set up to write this article. I also learned genetic algorithms from Prof. Winston — he's a great teacher!
We hope that, if you visited this article without knowing how the backpropagation algorithm works, you are now leaving it with an (at least rudimentary) mathematical or conceptual intuition of it. Writing and conveying such a complex algorithm to a supposed beginner has proven to be an extremely difficult task for us, but it's helped us truly understand what we've been learning about. With greater knowledge in a fundamental area of machine learning, we are now excited to take a look at new, interesting algorithms and disciplines in the field. We are looking forward to continuing to document these endeavors together.
rohankapur.com
Our ongoing effort to make the mathematics, science, linguistics, and philosophy of artificial intelligence fun and simple.
|
Per Harald Borgen | 1.3K | 7 | https://medium.com/learning-new-stuff/how-to-learn-neural-networks-758b78f2736e?source=tag_archive---------4---------------- | Learning How To Code Neural Networks – Learning New Stuff – Medium | This is the second post in a series of me trying to learn something new over a short period of time. The first time consisted of learning how to do machine learning in a week.
This time I’ve tried to learn neural networks. While I didn’t manage to do it within a week, due to various reasons, I did get a basic understanding of it throughout the summer and autumn of 2015.
By basic understanding, I mean that I finally know how to code simple neural networks from scratch on my own.
In this post, I’ll give a few explanations and guide you to the resources I’ve used, in case you’re interested in doing this yourself.
So what is a neural network? Let’s wait with the network part and start off with one single neuron.
The circle below illustrates an artificial neuron. Its input is 5 and its output is 1. The input is the sum of the three synapses connecting to the neuron (the three arrows at the left).
At the far left we see two input values plus a bias value. The input values are 1 and 0 (the green numbers), while the bias holds a value of -2 (the brown number).
The two inputs are then multiplied by their so called weights, which are 7 and 3 (the blue numbers).
Finally we add it up with the bias and end up with a number, in this case: 5 (the red number). This is the input for our artificial neuron.
The neuron then performs some kind of computation on this number — in our case the Sigmoid function, and then spits out an output. This happens to be 1, as the Sigmoid of 5 is roughly 1 if we round up (more info on the Sigmoid function follows later).
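As a minimal sketch, here is the same calculation written out in Python, using the numbers above:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

inputs = [1, 0]    # the green numbers
weights = [7, 3]   # the blue numbers
bias = -2          # the brown number

# Weighted sum of the inputs plus the bias: 1*7 + 0*3 + (-2) = 5
total = sum(i * w for i, w in zip(inputs, weights)) + bias

output = sigmoid(total)  # roughly 0.993, which rounds to 1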
If you connect a network of these neurons together, you have a neural network, which propagates forward — from input to output, via neurons which are connected to each other through synapses, like on the image to the left.
I can strongly recommend the Welch Labs videos on YouTube for getting a better intuitive explanation of this process.
After you’ve seen the Welch Labs videos, its a good idea to spend some time watching Week 4 of the Coursera’s Machine Learning course, which covers neural networks, as it’ll give you more intuition of how they work.
The course is fairly mathematical, and its based around Octave, while I prefer Python. Because of this, I did not do the programming exercises. Instead, I used the videos to help me understand what I needed to learn.
The first thing I realized I needed to investigate further was the Sigmoid function, as this seemed to be a critical part of many neural networks. I knew a little bit about the function, as it was also covered in Week 3 of the same course. So I went back and watched these videos again.
But watching videos won’t get you all the way. To really understand it, I felt I needed to code it from the ground up.
So I started to code a logistic regression algorithm from scratch (which happened to use the Sigmoid function).
It took a whole day, and it’s probably not a very good implementation of logistic regression. But that doesn’t matter, as I finally understood how it works. Check the code here.
You don’t need to perform this entire exercise yourself, as it requires some knowledge about and cost functions and gradient descent, which you might not have at this point.
But make sure you understand how the Sigmoid function works.
How a neural network works from input to output isn't that difficult to understand, at least conceptually.
More difficult, though, is understanding how the neural network actually learns from looking at a set of data samples.
The concept is called backpropagation.
The weights were the blue numbers on our neuron in the beginning of the article.
This process happens backwards, because you start at the end of the network (observe how wrong the network's 'guess' is), and then move backwards through the network, adjusting the weights on the way, until you finally reach the inputs.
To calculate this by hand requires some calculus, as it involves getting some derivatives of the network's weights. The Khan Academy calculus courses seem like a good way to start, though I haven't used them myself, as I took calculus at university.
The three best sources I found for understanding backpropagation are these:
You should definitely code along while you're reading the articles, especially the first two. It'll give you some sample code to look back at when you're confused in the future.
Plus, I can’t really emphasize this enough:
The third article is also fantastic, but I've used this more as a wiki than a plain tutorial, as it's actually an entire book. It contains thorough explanations of all the important concepts in neural networks.
These articles will also help you understand important concepts such as cost functions and gradient descent, which play equally important roles in neural networks.
In some articles and tutorials you’ll actually end up coding small neural networks. As soon as you’re comfortable with that, I recommend you to go all in on this strategy. It’s both fun and an extremely effective way of learning.
One of the articles I also learned a lot from was A Neural Network in 11 Lines Of Python by IAmTrask. It contains an extraordinary amount of compressed knowledge and concepts in just 11 lines.
After you’ve coded along with this example, you should do as the article states at the bottom, which is to implement it once again without looking at the tutorial. This forces you to really understand the concepts, and will likely reveal holes in your knowledge, which isn’t fun. However, when you finally manage it, you’ll feel like you’ve just acquired a new superpower.
When you’ve done this, you can continue with this Wild ML tutorial, by Denny Britz, which guides you through a little more robust neural network.
At this point, you could either try and code your own neural network from scratch or start playing around with some of the networks you have coded up already. It’s great fun to find a dataset that interests you and try to make some predictions with your neural nets.
To get a hold of a dataset, just visit my side project Datasets.co (← shameless self promotion) and find one you like.
Anyway, the point is that you're now better off experimenting with stuff that interests you rather than following my advice.
Personally, I’m currently learning how to use Python libraries that makes it easier to code up neural networks, like Theano, Lasagne and nolearn. I’m using this to do challenges on Kaggle, which is both great fun and great learning.
Good luck!
And don’t forget to press the heart button if you liked the article :)
Thanks for reading! My name is Per, I’m a co-founder of Scrimba — a better way to teach and learn code.
If you’ve read this far, I’d recommend you to check out this demo!
Co-founder of Scrimba, the next-generation platform for teaching and learning code. https://scrimba.com.
A publication about improving your technical skills.
|
Shi Yan | 4.4K | 7 | https://medium.com/mlreview/understanding-lstm-and-its-diagrams-37e2f46f1714?source=tag_archive---------5---------------- | Understanding LSTM and its diagrams – ML Review – Medium | I just want to reiterate what’s said here:
I’m not better at explaining LSTM, I want to write this down as a way to remember it myself. I think the above blog post written by Christopher Olah is the best LSTM material you would find. Please visit the original link if you want to learn LSTM. (But I did create some nice diagrams.)
Although we don’t know how brain functions yet, we have the feeling that it must have a logic unit and a memory unit. We make decisions by reasoning and by experience. So do computers, we have the logic units, CPUs and GPUs and we also have memories.
But when you look at a neural network, it functions like a black box. You feed in some inputs from one side, you receive some outputs from the other side. The decision it makes is mostly based on the current inputs.
I think it’s unfair to say that neural network has no memory at all. After all, those learnt weights are some kind of memory of the training data. But this memory is more static. Sometimes we want to remember an input for later use. There are many examples of such a situation, such as the stock market. To make a good investment judgement, we have to at least look at the stock data from a time window.
The naive way to let neural network accept a time series data is connecting several neural networks together. Each of the neural networks handles one time step. Instead of feeding the data at each individual time step, you provide data at all time steps within a window, or a context, to the neural network.
A lot of times, you need to process data that has periodic patterns. As a silly example, suppose you want to predict Christmas tree sales. This is a very seasonal thing and likely to peak only once a year. So a good strategy to predict Christmas tree sales is looking at the data from exactly a year back. For this kind of problem, you either need a big context that includes ancient data points, or you need a good memory. You know what data is valuable to remember for later use and what needs to be forgotten when it is useless.
Theoretically, the naively connected neural network, the so-called recurrent neural network, can work. But in practice, it suffers from two problems: vanishing gradients and exploding gradients, which make it unusable.
Then later, LSTM (long short-term memory) was invented to solve this issue by explicitly introducing a memory unit, called the cell, into the network. This is the diagram of an LSTM building block.
At first sight, this looks intimidating. Let's ignore the internals, but only look at the inputs and outputs of the unit. The network takes three inputs. X_t is the input of the current time step. h_t-1 is the output from the previous LSTM unit and C_t-1 is the "memory" of the previous unit, which I think is the most important input. As for outputs, h_t is the output of the current network. C_t is the memory of the current unit.
Therefore, this single unit makes decision by considering the current input, previous output and previous memory. And it generates a new output and alters its memory.
The way its internal memory C_t changes is pretty similar to piping water through a pipe. Assuming the memory is water, it flows into a pipe. You want to change this memory flow along the way and this change is controlled by two valves.
The first valve is called the forget valve. If you shut it, no old memory will be kept. If you fully open this valve, all old memory will pass through.
The second valve is the new memory valve. New memory will come in through a T shaped joint like above and merge with the old memory. Exactly how much new memory should come in is controlled by the second valve.
On the LSTM diagram, the top "pipe" is the memory pipe. The input is the old memory (a vector). The first cross ✖ it passes through is the forget valve. It is actually an element-wise multiplication operation. So if you multiply the old memory C_t-1 with a vector that is close to 0, that means you want to forget most of the old memory. You let the old memory go through if your forget valve equals 1.
Then the second operation the memory flow will go through is this + operator. This operator means element-wise summation. It resembles the T-shaped joint pipe. New memory and the old memory will merge by this operation. How much new memory should be added to the old memory is controlled by another valve, the ✖ below the + sign.
After these two operations, you have the old memory C_t-1 changed to the new memory C_t.
Now let's look at the valves. The first one is called the forget valve. It is controlled by a simple one-layer neural network. The inputs of this neural network are h_t-1, the output of the previous LSTM block, X_t, the input for the current LSTM block, C_t-1, the memory of the previous block, and finally a bias vector b_0. This neural network has a sigmoid activation function, and its output vector is the forget valve, which will be applied to the old memory C_t-1 by element-wise multiplication.
Now the second valve is called the new memory valve. Again, it is a simple one-layer neural network that takes the same inputs as the forget valve. This valve controls how much the new memory should influence the old memory.
The new memory itself, however, is generated by another neural network. It is also a one-layer network, but uses tanh as the activation function. The output of this network will element-wise multiply the new memory valve, and add to the old memory to form the new memory.
These two ✖ signs are the forget valve and the new memory valve.
And finally, we need to generate the output for this LSTM unit. This step has an output valve that is controlled by the new memory, the previous output h_t-1, the input X_t and a bias vector. This valve controls how much new memory should output to the next LSTM unit.
The above diagram is inspired by Christopher’s blog post. But most of the time, you will see a diagram like below. The major difference between the two variations is that the following diagram doesn’t treat the memory unit C as an input to the unit. Instead, it treats it as an internal thing “Cell”.
I like the Christopher’s diagram, in that it explicitly shows how this memory C gets passed from the previous unit to the next. But in the following image, you can’t easily see that C_t-1 is actually from the previous unit. and C_t is part of the output.
The second reason I don’t like the following diagram is that the computation you perform within the unit should be ordered, but you can’t see it clearly from the following diagram. For example to calculate the output of this unit, you need to have C_t, the new memory ready. Therefore, the first step should be evaluating C_t.
The following diagram tries to represent this “delay” or “order” with dash lines and solid lines (there are errors in this picture). Dash lines means the old memory, which is available at the beginning. Some solid lines means the new memory. Operations require the new memory have to wait until C_t is available.
But these two diagrams are essentially the same. Here, I want to use the same symbols and colors of the first diagram to redraw the above diagram:
This is the forget gate (valve) that shuts the old memory:
This is the new memory valve and the new memory:
These are the two valves and the element-wise summation to merge the old memory and the new memory to form C_t (in green, flows back to the big “Cell”):
This is the output valve and output of the LSTM unit:
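To tie the valves together in code, here is a minimal numpy sketch of a single LSTM step in its most common form. Note one simplification: the valves below are computed only from the previous output h_t-1 and the current input X_t, whereas the description above also feeds in the old memory C_t-1 (a "peephole"-style variation this sketch leaves out). All the weight and bias names are illustrative.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, Wf, bf, Wn, bn, Wm, bm, Wo, bo):
    v = np.concatenate([h_prev, x])   # previous output + current input

    f = sigmoid(Wf @ v + bf)          # forget valve
    n = sigmoid(Wn @ v + bn)          # new memory valve
    m = np.tanh(Wm @ v + bm)          # the new memory itself
    c = f * c_prev + n * m            # shut old memory, merge in new memory
    o = sigmoid(Wo @ v + bo)          # output valve
    h = o * np.tanh(c)                # output of this LSTM unit

    return h, c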
Software engineer & wantrepreneur. Interested in computer graphics, bitcoin and deep learning.
Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts.
|
Ross Goodwin | 686 | 23 | https://medium.com/artists-and-machine-intelligence/adventures-in-narrated-reality-6516ff395ba3?source=tag_archive---------6---------------- | Adventures in Narrated Reality – Artists and Machine Intelligence – Medium | By Ross Goodwin
In May 2015, Stanford PhD student Andrej Karpathy wrote a blog post entitled The Unreasonable Effectiveness of Recurrent Neural Networks and released a code repository called Char-RNN. Both received quite a lot of attention from the machine learning community in the months that followed, spurring commentary and a number of response posts from other researchers.
I remember reading these posts early last summer. Initially, I was somewhat underwhelmed—as at least one commentator pointed out, much of the generated text that Karpathy chose to highlight did not seem much better than results one might expect from high order character-level Markov chains.
Here is a snippet of Karpathy’s Char-RNN generated Shakespeare:
And here is a snippet of generated Shakespeare from a high order character-level Markov chain, via the post linked above:
So I was discouraged. And without access to affordable GPUs for training recurrent neural networks, I continued to experiment with Markov chains, generative grammars, template systems, and other ML-free solutions for generating text.
In December, New York University was kind enough to grant me access to their High Performance Computing facilities. I began to train my own recurrent neural networks using Karpathy’s code, and I finally discovered the quasi-magical capacities of these machines. Since then, I have been training a collection of recurrent neural network models for my thesis project at NYU, and exploring possibilities for devices that could enable such models to serve as expressive real-time narrators in our everyday lives.
At this point, since this is my very first Medium post, perhaps I should introduce myself: my name is Ross Goodwin, I’m a graduate student at NYU ITP in my final semester, and computational creative writing is my personal obsession.
Before I began my studies at ITP, I was a political ghostwriter. I graduated from MIT in 2009 with a B.S. degree in Economics, and during my undergraduate years I had worked on Barack Obama’s 2008 Presidential campaign. At the time, I wanted to be a political speechwriter, and my first job after graduation was a Presidential Writer position at the White House. In this role, I wrote Presidential Proclamations, which are statements of national days, weeks, and months of things—everything from Thanksgiving and African American History Month to lesser known observances like Safe Boating Week. It was a very strange job, but I thoroughly enjoyed it.
I left the White House in 2010 for a position at the U.S. Department of the Treasury, where I worked for two years, mostly putting together briefing binders for then-Secretary Timothy Geithner and Deputy Secretary Neal Wolin in the Department’s front office. I didn’t get many speechwriting opportunities, and pursuing a future in the financial world did not appeal to me, so I left to work as a freelance ghostwriter.
This was a rather dark time in my life, as I rapidly found myself writing for a variety of unsavory clients and causes in order to pay my rent every month. In completing these assignments, I began to integrate algorithms into my writing process to improve my productivity. (At the time, I didn’t think about these techniques as algorithmic, but it’s obvious in retrospect.) For example, if I had to write 12 letters, I’d write them in a spreadsheet with a paragraph in each cell. Each letter would exist in a column, and I would write across the rows—first I’d write all the first paragraphs as one group, then all the second paragraphs, then all the thirds, and so on. If I had to write a similar group of letters the next day for the same client, I would use an Excel macro to randomly shuffle the cells, then edit the paragraphs for cohesion and turn the results in as an entirely new batch of letters.
Writing this way, I found I could complete an 8-hour day of work in about 2 hours. I used the rest of my time to work on a novel that’s still not finished (but that’s a story for another time). With help from some friends, I turned the technique into a game we called The Diagonalization Argument after Georg Cantor’s 1891 mathematical proof of the same name.
In early 2014, a client asked me to write reviews of all the guides available online to learn the Python programming language. One guide stood out above all others, in the sheer number of times I saw users reference it on various online forums and in the countless glowing reviews it had earned across the Internet: Learn Python the Hard Way by Zed Shaw
So, to make my reviews better, I decided I might as well try to learn Python. My past attempts at learning to code had failed due to lack of commitment, lack of interest, or lack of a good project to get started. But this time was different somehow—Zed’s guide worked for me, and just like that I found myself completely and hopelessly addicted to programming.
As a writer, I gravitated immediately to the broad and expanding world of natural language processing and generation. My first few projects were simple poetry generators. And once I moved to New York City and started ITP, I discovered a local community of likeminded individuals leveraging computation to produce and enhance textual work. I hosted a Code Poetry Slam in November 2014 and began attending Todd Anderson’s monthly WordHack events at Babycastles.
In early 2015, I developed and launched word.camera, a web app and set of physical devices that use the Clarifai API to tag images with nouns, ConceptNet to find related words, and a template system to string the results together into descriptive (though often bizarre) prose poems related to the captured photographs. The project was about redefining the photographic experience, and it earned more attention than I expected [1,2,3]. In November, I was invited to exhibit this work at IDFA DocLab in Amsterdam.
At that point, it became obvious that word.camera (or some extension thereof) would become my ITP thesis project. And while searching for ways to improve its output, I began to experiment with training my own neural networks rather than using those others had trained via APIs.
As I mentioned above, I started using NYU’s High Performance Computing facilities in December. This supercomputing cluster includes a staggering array of computational resources — in particular, at least 32 Nvidia Tesla K80 GPUs, each with 24 GB of GPU memory. While GPUs aren’t strictly required to train deep neural networks, the massively parallel processes involved make them all but a necessity for training a larger model that will perform well in a reasonable amount of time.
Using two of Andrej Karpathy’s repositories, NeuralTalk2 and Char-RNN respectively, I trained an image captioning model and a number of models for generating text. As a result of having free access to the largest GPUs in the world, I was able to start training very large models right away.
NeuralTalk2 uses a convolutional neural network to classify images, then transfers that classification data to a recurrent neural network that generates a brief caption. For my first attempt at training a NeuralTalk2 model, I wanted to do something less traditional than simply captioning images.
In my opinion, the idea of machine “image captioning” is problematic because it’s so limited in scope. Fundamentally, a machine that can caption images is a machine that can describe or relate to what it sees in a highly intelligent way. I do understand that image captioning is an important benchmark for machine intelligence. However, I also believe that thinking such a machine’s primary use case will be to replace human image captioning represents a highly restrictive and narrow point of view.
So I tried training a model on frames and corresponding captions from every episode of the TV show The X-Files. My idea was to create a model that, if given an image, would generate a plausible line of dialogue from what it saw.
Unfortunately, it simply did not work—most likely due to the dialogue for a particular scene bearing no direct relationship to that scene’s imagery. Rather than generating a different line of dialogue for different images, the machine seemed to want to assign the same line to every image indiscriminately.
Strangely, these repetitive lines tended to say things like I don’t know, I’m not sure what you want, and I don’t know what to do. (One of my faculty advisors, Patrick Hebron, jokingly suggested this may be a sign of metacognition—needless to say, I was slightly creeped out but excited to continue these explorations.)
I tried two other less-than-traditional approaches with NeuralTalk2: training on Reddit image posts and corresponding comments, and training on pictures of recreational drugs and corresponding Erowid experience reports. Both worked better than my X-Files experiment, but neither produced particularly interesting results.
So I resigned myself to training a traditional image captioning model using the Microsoft Common Objects in Context (MSCOCO) caption set. In terms of objects represented, MSCOCO is far from exhaustive, but it does contain over 120,000 images with 5 captions each, which is more than I could’ve expected to produce on my own from any source. Furthermore, I figured I could always do something less traditional with such a model once trained.
I made just one adjustment to Karpathy’s default training parameters: decreased the word-frequency threshold from five to three. By default, NeuralTalk2 ignores any word that appears fewer than five times in the caption corpus it trains on. I guessed that reducing this threshold would result in some extra verbosity in the generated captions, possibly at the expense of accuracy, as a more verbose model might describe details that were not actually present in an image. However, after about five days of training, I had produced a model that exceeded 0.9 CIDEr in tests, which is about as good as Karpathy suggested the model could get in his documentation.
As opposed to NeuralTalk2, which is designed to caption images, Karpathy’s Char-RNN employs a character-level LSTM recurrent neural network simply for generating text. A recurrent neural network is fundamentally a linear pattern machine. Given a character (or set of characters) as a seed, a Char-RNN model will predict which character would come next based on what it has learned from its input corpus. By doing this again and again, the model can generate text in the same manner as a Markov chain, though its internal processes are far more sophisticated.
LSTM stands for Long Short-Term Memory, which remains a popular architecture for recurrent neural networks. Unlike a no-frills vanilla RNN, an LSTM protects its fragile underlying neural net with “gates” that determine which connections will persist in the machine’s weight matrices. (I’ve been told that others are using something called a GRU, but I have yet to investigate this architecture.)
I trained my first text generating LSTM on the same prose corpus I used for word.camera’s literary epitaphs. After about 18 hours, I was getting results like this:
This paragraph struck me as highly poetic, compared to what I’d seen in the past from a computer. The language wasn’t entirely sensical, but it certainly conjured imagery and employed relatively solid grammar. Furthermore, it was original. Originality has always been important to me in computer generated text—because what good is a generator if it just plagiarizes your input corpus? This is a major issue with high order Markov chains, but due to its more sophisticated internal mechanisms, the LSTM didn’t seem to have the same tendency.
Unfortunately, much of the prose-trained model output that contained less poetic language was also less interesting than the passage above. But given that I could produce poetic language with a prose-trained model, I wondered what results I could get from a poetry-trained model.
The output above comes from the first model I trained on poetry. I used the most readily available books I could find, mostly those of poets from the 19th century and earlier whose work had entered the public domain. The consistent line breaks and capitalization schemes were encouraging. But I still wasn’t satisfied with the language—due to the predominant age of the corpus, it seemed too ornate and formal. I wanted more modern-sounding poetic language, and so I knew I had to train a model on modern poetry.
I assembled a corpus of all the modern poetry books I could find online. It wasn’t nearly as easy as assembling the prior corpus—unfortunately, I can’t go into detail on how I got all the books for fear of being sued.
The results were much closer to what I was looking for in terms of language. But they were also inconsistent in quality. At the time, I believed this was because the corpus was too small, so I began to supplement my modern poetry corpus with select prose works to increase its size. It remains likely that this was the case. However, I had not yet discovered the seeding techniques I would later learn can dramatically improve LSTM output.
Another idea occurred to me: I could seed a poetic language LSTM model with a generated image caption to make a new, more poetic version of word.camera. Some of the initial results (see: left) were striking. I showed them to one of my mentors, Allison Parrish, who suggested that I find a way to integrate the caption throughout the poetic text, rather than just at the beginning. (I had showed her some longer examples, where the language had strayed quite far from the subject matter of the caption after a few lines.)
I thought about how to accomplish this, and settled on a technique of seeding the poetic language LSTM multiple times with the same image caption at different temperatures.
Temperature is a parameter, a number between zero and one, that controls the riskiness of a recurrent neural network’s character predictions. A low temperature value will result in text that’s repetitive but highly grammatical. Accordingly, high temperature results will be more innovative and surprising (the model may even invent its own words) while containing more mistakes. By iterating through temperature values with the same seed, the subject matter would remain consistent while the language varied, resulting in longer pieces that seemed more cohesive than anything I’d ever produced with a computer.
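As a rough sketch of where temperature enters the picture when sampling each character: the model's raw scores are divided by the temperature before being turned into probabilities, so low values sharpen the distribution (safe, repetitive text) and high values flatten it (riskier, more surprising text). The scores below stand in for whatever the trained LSTM outputs at each step.

import numpy as np

def sample_char(scores, temperature=0.5):
    # Scale the scores by the temperature, then softmax into probabilities.
    scaled = np.array(scores) / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    # Pick the index of the next character according to those probabilities.
    return np.random.choice(len(probs), p=probs)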
As I refined the aforementioned technique, I trained more LSTM models, attempting to discover the best training parameters. The performance of a neural network model is measured by its loss, which drops during training and eventually should be as close to zero as possible. A model’s loss is a statistical measurement indicating how well a model can predict the character sequences in its own corpus. During training, there are two loss figures to monitor: the training loss, which is defined by how well the model predicts the part of the corpus it’s actually training on, and the validation loss, which is defined by how well the model predicts an unknown validation sample that was removed from the corpus prior to training.
The goal of training a model is to reduce its validation loss as much as possible, because we want a model that accurately predicts unknown character sequences, not just those it's already seen. To this end, there are a number of parameters to adjust, among them the number of layers in the network, the size of each layer, the sequence length, the batch size, and the dropout rate.
The training process largely consists of monitoring the validation loss as it drops across model checkpoints, and monitoring the difference between training loss and validation loss. As Karpathy writes in his Char-RNN documentation:
In January, I released my code on GitHub along with a set of trained neural network models: an image captioning model and two poetic language LSTM models. In my GitHub README, I highlighted a few results I felt were particularly strong [1,2,3,4,5]. Unlike prior versions of word.camera that mostly relied on a strong connection between the image and the output, I found that I could still enjoy the result when the image caption was totally incorrect, and there often seemed to be some other accidental (or perhaps slightly-less-than-accidental) element connecting the image to the words.
I then shifted my focus to developing a new physical prototype. With the prior version of word.camera, I believed one of the most important parts of the experience was its portability. That’s why I developed a mobile web app first, and why I ensured all the physical prototypes I built were fully portable. For the new version, I started with a physical prototype rather than a mobile web application because developing an app initially seemed infeasible due to computational requirements, though I have since thought of some possible solutions.
Since this would be a rapid prototype, I decided to use a very small messenger bag as the case rather than fabricating my own. Also, my research suggested that some of Karpathy’s code may not run on the Raspberry Pi’s ARM architecture, so I needed a slightly larger computer that would require a larger power source.
I decided to use an Intel NUC that I powered with a backup battery for a laptop. I mounted an ELP wide angle camera to the strap, alongside a set of controls (a rotary potentiometer and a button) that communicated with the main computer via an Arduino.
Originally, I planned to dump the text output to a hacked Kindle, but ultimately decided the tactile nature of thermal printer paper would provide for a superior experience (and allow me to hand out the output on the street like I’d done with prior word.camera models). I found a large format thermal printer model with built-in batteries that uses 4"-wide paper (previous printers I’d used had taken paper half as wide), and I was able to pick up a couple of them on eBay for less than $50 each. Based on a suggestion from my friend Anthony Kesich, I decided to add an “ascii image” of the photo above the text.
In February, I was invited to speak at an art and machine learning symposium at Gray Area in San Francisco. In Amsterdam at IDFA in November, I had met Jessica Brillhart, who is a VR director on Google’s Cardboard team. In January, I began to collaborate with her and some other folks at Google on Deep Dream VR experiences with automated poetic voiceover. (If you’re unfamiliar with Deep Dream, check out this blog post from last summer along with the related GitHub repo and Wikipedia article.) We demonstrated these experiences at the event, which was also an auction to sell Deep Dream artwork to benefit the Gray Area Foundation.
Mike Tyka, an artist whose Deep Dream work was prominently featured in the auction, had asked me to use my poetic language LSTM to generate titles for his artwork. I had a lot of fun doing this, and I thought the titles came out well—they even earned a brief mention in the WIRED article about the show.
During my talk the day after the auction, I demonstrated my prototype. I walked onto the stage wearing my messenger bag, snapped a quick photo before I started speaking, and revealed the output at the end.
I would have been more nervous about sharing the machine’s poetic output in front of so many people, but the poetry had already passed what was, in my opinion, a more genuine test of its integrity: a small reading at a library in Brooklyn alongside traditional poets.
Earlier in February, I was invited to share some work at the Leonard Library in Williamsburg. The theme of the evening’s event was love and romance, so I generated several poems [1,2] from images I considered romantic. My reading was met with overwhelming approval from the other poets at the event, one of whom said that the poem I had generated from the iconic Times Square V-J Day kiss photograph by Alfred Eisenstaedt “messed [him] up” as it seemed to contain a plausible description of a flashback from the man’s perspective.
I had been worried because, as I once heard Allison Parrish say, so much commentary about computational creative writing focuses on computers replacing humans—but as anyone who has worked with computers and language knows, that perspective (which Allison summarized as “Now they’re even taking the poet’s job!”) is highly uninformed.
When we teach computers to write, the computers don’t replace us any more than pianos replace pianists—in a certain way, they become our pens, and we become more than writers. We become writers of writers.
Nietzsche, who was the first philosopher to use a typewriter, famously wrote “Our writing tools are also working on our thoughts,” which media theorist Friedrich Kittler analyzes in his book Gramophone, Film, Typewriter (p. 200):
If we employ machine intelligence to augment our writing activities, it’s worth asking how such technology would affect how we think about writing as well as how we think in the general sense. I’m inclined to believe that such a transformation would be positive, as it would enable us to reach beyond our native writing capacities and produce work that might better reflect our wordless internal thoughts and notions. (I hesitate to repeat the piano/pianist analogy for fear of stomping out its impact, but I think it applies here too.)
In producing fully automated writing machines, I am only attempting to demonstrate what is possible with a machine alone. In my research, I am ultimately striving to produce devices that allow humans to work in concert with machines to produce written work. My ambition is to augment our creativity, not to replace it.
Another ambition of mine is to promote a new framework that I’ve been calling Narrated Reality. We already have Virtual Reality (VR) and Augmented Reality (AR), so it only makes sense to provide another option (NR?)—perhaps one that’s less visual and more about supplementing existing experiences with expressive narration. That way, we can enjoy our experiences while we’re having them, then revisit them later in an augmented format.
For my ITP thesis, I had originally planned to produce one general-purpose device that used photographs, GPS coordinates (supplemented with Foursquare locations), and the time to narrate everyday experiences. However, after receiving some sage advice from Taeyoon Choi, I have decided to split that project into three devices: a camera, a compass, and a clock that respectively use image, location, and time to realize Narrated Reality.
Along with designing and building those devices, I am in the process of training a library of interchangeable LSTM models in order to experience a variety of options with each device in this new space.
After training a number of models on fiction and poetry, I decided to try something different: I trained a model on the Oxford English Dictionary.
The result was better than I ever could have anticipated: an automated Balderdash player that could generate plausible definitions for made up words. I made a Twitter bot so that people could submit their linguistic inventions, and a Tumblr blog for the complete, unabridged definitions.
I was amazed by the machine’s ability to take in and parrot back strings of arbitrary characters it had never seen before, and how it often seemed to understand them in the context of actual words.
The fictional definitions it created for real words were also frequently entertaining. My favorite of these was its definition for “love”—although a prior version of the model had defined love as “past tense of leave,” which I found equally amusing.
One particularly fascinating discovery I made with this bot concerned the importance of a certain seeding technique that Kyle McDonald taught me. As discussed above, when you generate text with a recurrent neural network, you can provide a seed to get the machine started. For example, if you wanted to know the machine’s feelings on the meaning of life, you might seed your LSTM with the following text:
And the machine would logically complete your sentence based on the patterns it had absorbed from its training corpus:
However, to get better and more consistent results, it makes sense to prepend the seed with a pre-seed (another paragraph of text) to push the LSTM into a desired state. In practice, it’s good to use a high quality sample of output from the model you’re seeding with length approximately equal to the sequence length (see above) you set during training.
This means the seed will now look something like this:
And the raw output will look like this (though usually I remove the pre-seed when I present the output):
The difference was more than apparent when I began using this technique with the dictionary model. Without the pre-seed, the bot would usually fail to repeat an unknown word within its generated definition. With the pre-seed, it would reliably parrot back whatever gibberish it had received.
In the end, the Oxford English Dictionary model trained to a significantly lower final validation loss (< 0.75) than any other model I had trained, or have trained since. One commenter on Hacker News noted:
After considering what to do next, I decided to try integrating dictionary definitions into the prose and poetry corpora I had been training before. Additionally, another Stanford PhD student named Justin Johnson released a new and improved version of Karpathy’s Char-RNN, Torch-RNN, which promised to use 7x less memory, which would in turn allow for me to train even larger models than I had been training before on the same GPUs.
It took me an evening to get Torch-RNN working on NYU’s supercomputing cluster, but once I had it running I was immediately able to start training models four times as large as those I’d trained on before. My initial models had 20–25 million parameters, and now I was training with 80–85 million, with some extra room to increase batch size and sequence length parameters.
The results I got from the first model were stunning—the corpus was about 45% poetry, 45% prose, and 10% dictionary definitions, and the output appeared more prose-like while remaining somewhat cohesive and painting vivid imagery.
Next, I decided to train a model on Noam Chomsky’s complete works. Most individuals have not produced enough publicly available text (25–100 MB raw text, or 50–200 novels) to train an LSTM this size. Noam Chomsky is an exception, and the corpus of his writing I was able to assemble weighs in at a hefty 41.2 MB. (This project was complicated by the fact that I worked for Noam Chomsky as an undergraduate at MIT, but that’s a story for another time.) Here is a sample of the output from that model:
Unfortunately, I’ve had trouble making it say anything interesting about language, as it prefers to rattle on and on about the U.S. and Israel and Palestine. Perhaps I’ll have to train the next model on academic papers alone and see what happens.
Most recently, I’ve been training machines on movie screenplays, and getting some interesting results. If you train an LSTM on continuous dialogue, you can ask the model questions and receive plausible responses.
I promised myself I wouldn’t write more than 5000 words for this article, and I’ve already passed that threshold. So, rather than attempting some sort of eloquent conclusion, I’ll leave you with this brief video.
There’s much more to come in the near future. Stay tuned.
Edit 6/9/16: Check out Part II!
not a poet | new forms & interfaces for written language, narrated reality, &c.
AMI is a program at Google that brings together artists and engineers to realize projects using Machine Intelligence. Works are developed together alongside artists’ current practices and shown at galleries, biennials, festivals, or online.
|
Eric Elliott | 947 | 9 | https://medium.com/javascript-scene/how-to-build-a-neuron-exploring-ai-in-javascript-pt-1-c2726f1f02b2?source=tag_archive---------7---------------- | How to Build a Neuron: Exploring AI in JavaScript Pt 1 | Years ago, I was working on a project that needed to be adaptive. Essentially, the software needed to learn and get better at a frequently repeated task over time.
I’d read about neural networks and some early success people had achieved with them, so I decided to try it out myself. That marked the beginning of a life-long fascination with AI.
AI is a really big deal. There are a small handful of technologies that will dramatically change the world over the course of the next 25 years. Three of the biggest disruptors rely deeply on AI:
Self driving cars alone will disrupt more than 10 million jobs in America, radically improve transportation and shipping efficiency, and may lead to a huge change in car ownership as we outsource transportation and the pains of car ownership and maintenance to apps like Uber.
You’ve probably heard about Google’s self driving cars, but Tesla, Mercedes, BMW and other car manufacturers are also making big bets on self driving technology.
Regulations, not technology, are the primary obstacles for drone-based commercial services such as Amazon air, and just a few days ago, the FAA relaxed restrictions on commercial drone flights. It’s still not legal for Amazon to deliver packages to your door with drones, but that will soon change, and when that happens, commerce will never be the same.
Of course half a million consumer drone sales over the last holiday season implies that drones are going to change a lot more than commerce. Expect to see a lot more of them hovering obnoxiously in every metro area in the world in the coming years.
Augmented and virtual reality will fundamentally transform what it means to be human. As our senses are augmented by virtual constructs mixed seamlessly with the real world, we’ll find new ways to work, new ways to play, and new ways to interact with each other, including AR assisted learning, telepresence, and radical new experiences we haven’t dreamed of, yet.
All of these technologies require our gadgets to have an awareness of the surrounding environment, and the ability to respond behaviorally to environmental inputs. Self driving cars need to see obstacles and make corrections to avoid them. Drones need to detect collision hazards, wind, and the ground to land on. Room scale VR needs to alert you to the room boundaries so you don’t wander into walls, and AR devices need to detect tables, chairs, desks, and walls, and allow virtual elements and characters to interact with them.
Processing sensory inputs and figuring out what they mean is one of the most important jobs that our brain is responsible for.
How does the human brain deal with the complexity of that job? With neurons.
Taken alone, a single neuron doesn’t do anything particularly interesting, but when combined together, neural networks are responsible for our ability to recognize the world around us, solve problems, and interact with our environment and the people around us.
Neural networks are the mechanism that allows us to use language, build tools, catch balls, type, read this article, remember things, and basically do all the things we consider to be “thinking”.
Recently, scientists have been scanning sections of small animal brains on the road to whole brain emulation. For example, a molecular-level model of the 302 neurons in the C. elegans roundworm.
The blue brain project is an attempt to do the same thing with a human brain. The research uses microscopes to scan slices of living human brain tissue. It’s an ambitious project that is still in its infancy a decade after it launched, but nobody expects it to be finished tomorrow.
We are still a long way from whole brain emulation for anything but the simplest organisms, but eventually, we may be able to emulate a whole human brain on a computer at the molecular level.
Before we try to emulate even basic neuron functionality ourselves, we should learn more about how neurons work.
A neuron is a cell that collects input signals (electrical potentials) from synaptic terminals (typically from dendrites, but sometimes directly on the cell membrane). When the sum of those signals crosses a certain threshold potential at the axon hillock trigger zone, the neuron fires an output signal, called an action potential.
The action potential travels along the output nerve fiber, called an axon. The axon splits into collateral branches which can carry the output signal to different parts of the neural network. Each axon branch terminates by splitting into clusters of tiny terminal branches, which interface with other neurons through synapses.
Synapse is the word used to describe the transmission mechanism from one neuron to the next.
There are two kinds of synapse receptors on the postsynaptic terminal wall: ion channels and metabolic channels.
Ion channels are fast (tens of milliseconds), and can either excite or inhibit the potential in the postsynaptic neuron, by opening channels for positively or negatively charged ions to enter the cell, respectively.
In an ionotropic transmission, the neurotransmitter is released from the presynaptic neuron into the synaptic cleft — a tiny gap between the terminals of the presynaptic neuron and the postsynaptic neuron. It binds to receptors on the postsynaptic terminal wall, which causes them to open, allowing electrically charged ions to flow into the postsynaptic cell, causing a change to the cell’s potential.
Metabolic channels are slower and more controlled than ion channels. In chemical transmissions, the action potential triggers the release of chemical transmitters from the presynaptic terminal into the synaptic cleft.
Those chemical transmitters bind to metabolic receptors which do not have ion channels of their own. That binding triggers chemical reactions on the inside of the cell wall to release G-proteins which can open ion channels connected to different receptors. As the G-proteins must first diffuse and rebind to neighboring channels, this process naturally takes longer.
The duration of metabolic effect can vary from about 100ms to several minutes, depending on how long it takes for neurotransmitters to be absorbed, released, diffused, or recycled back into the presynaptic terminal.
Like ion channels, the signal can be either exciting or inhibitory to the postsynaptic neuron potential.
There is also another type of synapse, called an electrical synapse. Unlike the chemical synapses described above, which rely on chemical neurotransmitters and receptors at axon terminals, an electrical synapse connects dendrites from one cell directly to dendrites of another cell by a gap junction, which is a channel that allows ions and other small molecules to pass directly between the cells, effectively creating one large neuron with multiple axons.
Cells connected by electrical synapses almost always fire simultaneously. When any connected cell fires, all connected cells fire with it. However, some gap junctions are one way.
Among other things, electrical synapses connect cells that control muscle groups such as the heart, where it’s important that all related cells cooperate, creating simultaneous muscle contractions.
Different synapses can have different strengths (called weights). A synapse weight can change over time through a process known as synaptic plasticity.
It is believed that changes in synapse connection strength is how we form memory. In other words, in order to learn and form memories, our brain literally rewires itself.
An increase in synaptic weight is called Long Term Potentiation (LTP).
A decrease in synaptic weight is called Long Term Depression (LTD).
If the postsynaptic neuron tends to fire a lot when the presynaptic neuron fires, the synaptic weight increases. If the cells don’t tend to fire together often, the connection weakens. In other words: cells that fire together, wire together.
The key to synaptic plasticity is hidden in a pair of 20ms windows:
If the presynaptic neuron fires before the postsynaptic neuron within 20ms, the weight increases (LTP).
If the presynaptic neuron fires after the postsynaptic neuron within 20ms, the weight decreases (LTD).
This process is called spike-timing-dependent plasticity.
Spike-timing-dependent plasticity was discovered in the 1990’s and is still being explored, but it is believed that action potential backpropagation from the cell’s axon to the dendrites is involved in the LTP process.
During a typical forward-propagating event, glutamate will be released from the presynaptic terminal, which binds to AMPA receptors in the postsynaptic terminal wall, allowing positively charged sodium ions (Na+) into the cell.
If a large enough depolarization event occurs inside the cell (perhaps a backpropagation potential from the axon trigger point), electrostatic repulsion will open a magnesium block in NMDA receptors, allowing even more sodium to flood the cell along with calcium (Ca2+). At the same time, potassium (K+) flows out of the cell. These events themselves only last tens of milliseconds, but they have indirect lasting effects.
An influx of calcium causes extra AMPA receptors to be inserted into the cell membrane, which will allow more sodium ions into the cell during future action potential events from the presynaptic neuron.
A similar process works in reverse to trigger LTD.
During LTP events, a special class of proteins called growth factors can also form, which can cause new synapses to grow, strengthening the bond between the two cells. The impact of new synapse growth can be permanent, assuming that the neurons continue to fire together frequently.
Many artificial neurons act less like neurons and more like transistors with two simple states: on or off. If enough upstream neurons are on rather than off, the neuron is on. Otherwise, it’s off. Other neural nets use input values from -1 to +1. The basic math looks a little like the following:
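Here is a minimal sketch of that idea, written in Python for brevity (the weights, inputs, and threshold below are illustrative assumptions, not values from any real network):
def binary_neuron(inputs, weights, threshold=0.0):
    # classic on/off unit: weighted sum of the inputs, compared against a threshold
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# three upstream neurons, two "on" and one "off" (made-up weights)
print(binary_neuron([1, 0, 1], [0.5, -0.2, 0.4]))  # prints 1, since 0.9 > 0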
This is a good idea if you want to conserve CPU power so you can emulate a lot more neurons, and we’ve been able to use these basic principles to accomplish very simple pattern recognition tasks, such as optical character recognition (OCR) using pre-trained networks. However, there’s a problem.
As I’ve described above, real neurons don’t behave that way. Instead, synapses transmit fluctuating continuous value potentials over time through the soma (cell body) to the axon hillock trigger zone where the sum of the signal may or may not trigger an action potential at any given moment in time. If the potential in the soma remains high, pulses may continue as the cell triggers at high frequency (once every few milliseconds).
Lots of variables influence the process, the trigger frequencies, and the pattern of action potential bursts. With the model presented above, how would you determine whether or not triggers occurred within the LTP/LTD windows?
What critical element is our basic model missing? Time.
But that’s a story for a different article. Stay tuned for part 2.
Eric Elliott is the author of “Programming JavaScript Applications” (O’Reilly), and “Learn JavaScript with Eric Elliott”. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
He spends most of his time in the San Francisco Bay Area with the most beautiful woman in the world.
Make some magic. #JavaScript
To submit, DM your proposal to @JS_Cheerleader on Twitter
|
Dhruv Parthasarathy | 665 | 11 | https://medium.com/@dhruvp/how-to-write-a-neural-network-to-play-pong-from-scratch-956b57d4f6e0?source=tag_archive---------8---------------- | Write an AI to win at Pong from scratch with Reinforcement Learning | There’s a huge difference between reading about Reinforcement Learning and actually implementing it.
In this post, you’ll implement a Neural Network for Reinforcement Learning and see it learn more and more as it finally becomes good enough to beat the computer in Pong! You can play around with other such Atari games at the OpenAI Gym.
By the end of this post, you’ll be able to do the following:
The code and the idea are all tightly based on Andrej Karpathy’s blog post. The code in me_pong.py is intended to be an easier-to-follow version of pong.py, which was written by Dr. Karpathy.
To follow along, you’ll need to know the following:
If you want a deeper dive into the material at hand, read the blog post on which all of this is based. This post is meant to be a simpler introduction to that material.
Great! Let’s get started.
We are given the following:
Can we use these pieces to train our agent to beat the computer? Moreover, can we make our solution generic enough so it can be reused to win in games that aren’t pong?
Indeed, we can! Andrej does this by building a Neural Network that takes in each image and outputs a command to our AI to move up or down.
We can break this down a bit more into the following steps:
Our Neural Network, based heavily on Andrej’s solution, will do the following:
Ok now that we’ve described the problem and its solution, let’s get to writing some code!
We’re now going to follow the code in me_pong.py. Please keep it open and read along! The code starts here:
First, let’s use OpenAI Gym to make a game environment and get our very first image of the game.
Next, we set a bunch of parameters based off of Andrej’s blog post. We aren’t going to worry about tuning them but note that you can probably get better performance by doing so. The parameters we will use are:
Then, we set counters, initial values, and the initial weights in our Neural Network.
Weights are stored in matrices. Layer 1 of our Neural Network is a 200 x 6400 matrix representing the weights for our hidden layer. For layer 1, element w1_ij represents the weight of neuron i for input pixel j in layer 1.
Layer 2 is a 200 x 1 matrix representing the weights of the output of the hidden layer on our final output. For layer 2, element w2_i represents the weights we place on the activation of neuron i in the hidden layer.
We initialize each layer’s weights with random numbers for now. We divide by the square root of the number of the dimension size to normalize our weights.
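As a rough sketch of that setup (the variable names and the exact normalization are my assumptions based on the shapes described above, not the literal me_pong.py source; I treat layer 2 as a length-200 vector, which plays the same role as the 200 x 1 matrix):
import numpy as np
num_hidden_neurons = 200
input_dimensions = 80 * 80  # 6400 preprocessed pixels
weights = {
    '1': np.random.randn(num_hidden_neurons, input_dimensions) / np.sqrt(input_dimensions),
    '2': np.random.randn(num_hidden_neurons) / np.sqrt(num_hidden_neurons),
}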
Next, we set up the initial parameters for RMSProp (a method for updating weights that we will discuss later). Don’t worry too much about understanding what you see below. I’m mainly bringing it up here so we can continue to follow along the main code block.
We’ll need to collect a bunch of observations and intermediate values across the episode and use those to compute the gradient at the end based on the result. The below sets up the arrays where we’ll collect all that information.
Ok we’re all done with the setup! If you were following, it should look something like this:
Phew. Now for the fun part!
The crux of our algorithm is going to live in a loop where we continually make a move and then learn based on the results of the move. We’ll put everything in a while block for now but in reality you might set up a break condition to stop the process.
The first step to our algorithm is processing the image of the game that OpenAI Gym passed us. We really don’t care about the entire image - just certain details. We do this below:
Let’s dive into preprocess_observations to see how we convert the image OpenAI Gym gives us into something we can use to train our Neural Network. The basic steps are:
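Roughly: crop the image to the playing area, downsample it, erase the background, set every remaining pixel (ball and paddles) to 1, flatten the result into a 6400-dimensional vector, and finally subtract the previous frame so the network can see motion. A sketch of what this might look like (the crop rows and the background pixel values 144 and 109 come from Karpathy’s pong.py and are assumptions here):
def preprocess_observations(observation, prev_processed, input_dimensions=80 * 80):
    processed = observation[35:195]      # crop to the playing area
    processed = processed[::2, ::2, 0]   # downsample by a factor of 2, keep one color channel
    processed[processed == 144] = 0      # erase background (type 1)
    processed[processed == 109] = 0      # erase background (type 2)
    processed[processed != 0] = 1        # ball and paddles become 1
    processed = processed.astype(float).ravel()
    # feed the difference between frames so that motion is visible to the network
    if prev_processed is None:
        delta = np.zeros(input_dimensions)
    else:
        delta = processed - prev_processed
    return delta, processed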
Now that we’ve preprocessed the observations, let’s move on to actually sending the observations through our neural net to generate the probability of telling our AI to move up. Here are the steps we’ll take:
How exactly does apply_neural_nets take observations and weights and generate a probability of going up? This is just the forward pass of the Neural Network. Let’s look at the code below for more information:
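A sketch of the forward pass, under the same naming assumptions as above (not the literal me_pong.py source): a weighted sum of the pixels gives the hidden layer, ReLU keeps only the positive activations, and a second weighted sum squashed by a sigmoid gives the probability of moving up:
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(vector):
    vector[vector < 0] = 0
    return vector

def apply_neural_nets(observation_matrix, weights):
    # pixels -> hidden layer (ReLU) -> single output neuron (sigmoid) = P(move up)
    hidden_layer_values = np.dot(weights['1'], observation_matrix)
    hidden_layer_values = relu(hidden_layer_values)
    output_layer_values = np.dot(hidden_layer_values, weights['2'])
    up_probability = sigmoid(output_layer_values)
    return hidden_layer_values, up_probability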
As you can see, it’s not many steps at all! Let’s go step by step:
Let’s return to the main algorithm and continue on. Now that we have obtained a probability of going up, we need to now record the results for later learning and choose an action to tell our AI to implement:
We choose an action by flipping an imaginary coin that lands “up” with probability up_probability and down with 1 - up_probability. If it lands up, we tell our AI to go up and if not, we tell it to go down. We also record the observations and intermediate values from this step so that we can use them to compute gradients once the episode ends.
Having done that, we pass the action to OpenAI Gym via env.step(action).
Ok we’ve covered the first half of the solution! We know what action to tell our AI to take. If you’ve been following along, your code should look like this:
Now that we’ve made our move, it’s time to start learning so we figure out the right weights in our Neural Network!
Learning is all about seeing the result of the action (i.e. whether or not we won the round) and changing our weights accordingly. The first step to learning is asking the following question:
Mathematically, this is just the derivative of our result with respect to the outputs of our final layer. If L is the value of our result to us and f is the function that gives us the activations of our final layer, this derivative is just ∂L/∂f.
In a binary classification context (i.e. we just have to tell the AI one of two actions, up or down), this derivative turns out to be ∂L/∂f = y - σ(f), where y is the true label (1 for up, 0 for down) and σ(f) is our predicted probability of going up.
Note that σ in the above equation represents the sigmoid function. Read the Attribute Classification section here for more information about how we get the above derivative. We simplify this further below:
After one action(moving the paddle up or down), we don’t really have an idea of whether or not this was the right action. So we’re going to cheat and treat the action we end up sampling from our probability as the correct action.
Our prediction for this round is going to be the probability of going up that we calculated. Using that, ∂L/∂f is simply the fake label (1 if we sampled up, 0 if we sampled down) minus the probability of going up that our network produced.
Awesome! We have the gradient per action.
The next step is to figure out how we learn after the end of an episode (i.e. when we or our opponent miss the ball and someone gets a point). We do this by computing the policy gradient of the network at the end of each episode. The intuition here is that if we won the round, we’d like our network to generate more of the actions that led to us winning. Alternatively, if we lose, we’re going to try and generate less of these actions.
OpenAI Gym provides us the handy done variable to tell us when an episode finishes (i.e. we missed the ball or our opponent missed the ball). When we notice we are done, the first thing we do is compile all our observations and gradient calculations for the episode. This allows us to apply our learnings over all the actions in the episode.
Next, we want to learn in such a way that actions taken towards the end of an episode more heavily influence our learning than actions taken at the beginning. This is called discounting.
Think about it this way - if you moved up at the first frame of the episode, it probably had very little impact on whether or not you win. However, closer to the end of the episode, your actions probably have a much larger effect as they determine whether or not your paddle reaches the ball and how your paddle hits the ball.
We’re going to take this weighting into account by discounting our rewards such that rewards from earlier frames are discounted a lot more than rewards for later frames. After this, we’re going to finally use backpropagation to compute the gradient (i.e. the direction we need to move our weights to improve).
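A sketch of the discounting step (gamma is the discount factor; resetting the running sum whenever a reward is non-zero is a Pong-specific trick from Karpathy’s post, since a point ends the rally):
def discount_rewards(rewards, gamma):
    rewards = np.asarray(rewards, dtype=np.float64)
    discounted_rewards = np.zeros_like(rewards)
    running_add = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:
            running_add = 0.0  # a point was scored, so start credit assignment fresh
        running_add = running_add * gamma + rewards[t]
        discounted_rewards[t] = running_add
    return discounted_rewards
It is also common to normalize the discounted rewards (subtract the mean and divide by the standard deviation) before using them to scale the gradients.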
Let’s dig in a bit into how the policy gradient for the episode is computed. This is one of the most important parts of Reinforcement Learning as it’s how our agent figures out how to improve over time.
To begin with, if you haven’t already, read this excerpt on backpropagation from Michael Nielsen’s excellent free book on Deep Learning.
As you’ll see in that excerpt, there are four fundamental equations of backpropagation, a technique for computing the gradient for our weights.
Our goal is to find ∂C/∂w1 (BP4), the derivative of the cost function with respect to the first layer’s weights, and ∂C/∂w2, the derivative of the cost function with respect to the second layer’s weights. These gradients will help us understand what direction to move our weights in for the greatest improvement.
To begin with, let’s start with ∂C/∂w2. If a^l2 is the activations of the hidden layer (layer 2), we see that the formula is:
Indeed, this is exactly what we do here:
Next, we need to calculate ∂C/∂w1. The formula for that is:
and we also know that a^l1 is just our observation_values.
So all we need now is δ^l2. Once we have that, we can calculate ∂C/∂w1 and return. We do just that below:
If you’ve been following along, your function should look like this:
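Here is a sketch of that function, mirroring Karpathy’s policy_backward with names adapted to this walkthrough (treat it as an illustration rather than the exact me_pong.py code):
def compute_gradient(gradient_log_p, hidden_layer_values, observation_values, weights):
    # dC/dw2: credit each hidden neuron by how active it was when the output error occurred
    dC_dw2 = hidden_layer_values.T.dot(gradient_log_p).ravel()
    # push the error back through layer 2's weights to get the hidden-layer delta
    delta_l2 = np.outer(gradient_log_p, weights['2'])
    delta_l2[hidden_layer_values <= 0] = 0  # ReLU derivative: no gradient where the neuron was off
    dC_dw1 = delta_l2.T.dot(observation_values)
    return {'1': dC_dw1, '2': dC_dw2}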
With that, we’ve finished backpropagation and computed our gradients!
After we have finished batch_size episodes, we finally update our weights for our Neural Network and implement our learnings.
To update the weights, we simply apply RMSProp, an algorithm for updating weights described by Sebastian Ruder here.
We implement this below:
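A sketch of that update (g_dict holds the gradients summed over the batch and expectation_g_squared is the running average of squared gradients; these names are assumptions, and the small constant in the denominator just avoids division by zero):
def update_weights(weights, expectation_g_squared, g_dict, decay_rate, learning_rate):
    epsilon = 1e-5
    for layer_name in weights.keys():
        g = g_dict[layer_name]
        expectation_g_squared[layer_name] = (
            decay_rate * expectation_g_squared[layer_name] + (1 - decay_rate) * g ** 2)
        # note the +=: we ascend the policy gradient to make winning actions more likely
        weights[layer_name] += learning_rate * g / (np.sqrt(expectation_g_squared[layer_name]) + epsilon)
        g_dict[layer_name] = np.zeros_like(weights[layer_name])  # reset the batch buffer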
This is the step that tweaks our weights and allows us to get better over time.
This is basically it! Putting it all together, it should look like this.
You just coded a full Neural Network for playing Pong! Uncomment env.render() and run it for 3–4 days to see it finally beat the computer! You’ll need to do some pickling as done in Andrej Karpathy’s solution to be able to visualize your results when you win.
According to the blog post, this algorithm should take around 3 days of training on a Macbook to start beating the computer.
Consider tweaking the parameters or using Convolutional Neural Nets to boost the performance further.
If you want a further primer into Neural Networks and Reinforcement Learning, there are some great resources to learn more (I work at Udacity as the Director of Machine Learning programs):
@dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity.
|
Waleed Abdulla | 507 | 12 | https://medium.com/@waleedka/traffic-sign-recognition-with-tensorflow-629dffc391a6?source=tag_archive---------9---------------- | Traffic Sign Recognition with TensorFlow – Waleed Abdulla – Medium | This is part 1 of a series about building a deep learning model to recognize traffic signs. It’s intended to be a learning experience, for myself and for anyone else who likes to follow along. There are a lot of resources that cover the theory and math of neural networks, so I’ll focus on the practical aspects instead. I’ll describe my own experience building this model and share the source code and relevant materials. This is suitable for those who know Python and the basics of machine learning already, but want hands on experience and to practice building a real application.
In this part, I’ll talk about image classification and I’ll keep the model as simple as possible. In later parts, I’ll cover convolutional networks, data augmentation, and object detection.
The source code is available in this Jupyter notebook. I’m using Python 3.5 and TensorFlow 0.12. If you prefer to run the code in Docker, you can use my Docker image that contains many popular deep learning tools. Run it with this command:
Note that my project directory is in ~/traffic and I’m mapping it to the /traffic directory in the Docker container. Modify this if you’re using a different directory.
My first challenge was finding a good training dataset. Traffic sign recognition is a well studied problem, so I figured I’ll find something online.
I started by googling “traffic sign dataset” and found several options. I picked the Belgian Traffic Sign Dataset because it was big enough to train on, and yet small enough to be easy to work with.
You can download the dataset from http://btsd.ethz.ch/shareddata/. There are a lot of datasets on that page, but you only need the two files listed under BelgiumTS for Classification (cropped images):
After expanding the files, this is my directory structure. Try to match it so you can run the code without having to change the paths:
Each of the two directories contains 62 subdirectories, named sequentially from 00000 to 00061. The directory names represent the labels, and the images inside each directory are samples of each label.
Or, if you prefer to sound more formal: do Exploratory Data Analysis. It’s tempting to skip this part, but I’ve found that the code I write to examine the data ends up being used a lot throughout the project. I usually do this in Jupyter notebooks and share them with the team. Knowing your data well from the start saves you a lot of time later.
The images in this dataset are in an old .ppm format. So old, in fact, that most tools don’t support it. Which meant that I couldn’t casually browse the folders to take a look at the images. Luckily, the Scikit Image library recognizes this format. This code will load the data and return two lists: images and labels.
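A sketch of that loader (it follows the directory layout described above; I use skimage.io.imread here, and the exact helper in the notebook may differ):
import os
import skimage.io

def load_data(data_dir):
    # each subdirectory name is a label; every .ppm file inside it is a sample of that label
    directories = [d for d in os.listdir(data_dir)
                   if os.path.isdir(os.path.join(data_dir, d))]
    images, labels = [], []
    for d in directories:
        label_dir = os.path.join(data_dir, d)
        file_names = [os.path.join(label_dir, f)
                      for f in os.listdir(label_dir) if f.endswith(".ppm")]
        for f in file_names:
            images.append(skimage.io.imread(f))
            labels.append(int(d))
    return images, labels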
This is a small dataset so I’m loading everything into RAM to keep it simple. For larger datasets, you’d want to load the data in batches.
After loading the images into Numpy arrays, I display a sample image of each label. See code in the notebook. This is our dataset:
Looks like a good training set. The image quality is great, and there are a variety of angles and lighting conditions. More importantly, the traffic signs occupy most of the area of each image, which allows me to focus on object classification and not have to worry about finding the location of the traffic sign in the image (object detection). I’ll get to object detection in a future post.
The first thing I noticed from the samples above is that images are square-ish, but have different aspect ratios. My neural network will take a fixed-size input, so I have some preprocessing to do. I’ll get to that soon, but first let’s pick one label and see more of its images. Here is an example of label 32:
It looks like the dataset considers all speed limit signs to be of the same class, regardless of the numbers on them. That’s fine, as long as we know about it beforehand and know what to expect. That’s why understanding your dataset is so important and can save you a lot of pain and confusion later.
I’ll leave exploring the other labels to you. Labels 26 and 27 are interesting to check. They also have numbers in red circles, so the model will have to get really good to differentiate between them.
Most image classification networks expect images of a fixed size, and our first model will do as well. So we need to resize all the images to the same size.
But since the images have different aspect ratios, some of them will be stretched vertically or horizontally. Is that a problem? I think it’s not in this case, because the differences in aspect ratios are not that large. My own criterion is that if a person can recognize the images when they’re stretched, then the model should be able to do so as well.
What are the sizes of the images anyway? Let’s print a few examples:
The sizes seem to hover around 128x128. I could use that size to preserve as much information as possible, but in early development I prefer to use a smaller size because it leads to faster training, which allows me to iterate faster. I experimented with 16x16 and 20x20, but they were too small. I ended up picking 32x32 which is easy to recognize (see below) and reduces the size of the model and training data by a factor of 16 compared to 128x128.
I’m also in the habit of printing the min() and max() values often. It’s a simple way to verify the range of the data and catch bugs early. This tells me that the image colors are the standard range of 0–255.
We’re getting to the interesting part! Continuing the theme of keeping it simple, I started with the simplest possible model: A one layer network that consists of one neuron per label.
This network has 62 neurons and each neuron takes the RGB values of all pixels as input. Effectively, each neuron receives 32*32*3=3072 inputs. This is a fully-connected layer because every neuron connects to every input value. You’re probably familiar with its equation: y = xW + b.
I start with a simple model because it’s easy to explain, easy to debug, and fast to train. Once this works end to end, expanding on it is much easier than building something complex from the start.
TensorFlow encapsulates the architecture of a neural network in an execution graph. The graph consists of operations (Ops for short) such as Add, Multiply, Reshape, ...etc. These ops perform actions on data in tensors (multidimensional arrays).
I’ll go through the code to build the graph step by step below, but here is the full code if you prefer to scan it first:
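Here is a condensed sketch of the graph (TensorFlow 0.12-era API; the helper names and arguments, such as tf.contrib.layers.fully_connected, are written from memory and should be treated as assumptions rather than the notebook verbatim):
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # inputs: a batch of 32x32 RGB images and their integer labels
    images_ph = tf.placeholder(tf.float32, [None, 32, 32, 3])
    labels_ph = tf.placeholder(tf.int32, [None])
    # flatten the images and run them through one fully connected layer of 62 neurons
    images_flat = tf.contrib.layers.flatten(images_ph)
    logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)
    # the predicted label is simply the index of the largest logit
    predicted_labels = tf.argmax(logits, 1)
    # cross-entropy between logits and groundtruth labels, averaged over the batch
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels_ph))
    train = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
    init = tf.global_variables_initializer()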
First, I create the Graph object. TensorFlow has a default global graph, but I don’t recommend using it. Global variables are bad in general because they make it too easy to introduce bugs. I prefer to create the graph explicitly.
Then I define Placeholders for the images and labels. The placeholders are TensorFlow’s way of receiving input from the main program. Notice that I create the placeholders (and all other ops) inside the block of with graph.as_default(). This is so they become part of my graph object rather than the global graph.
The shape of the images_ph placeholder is [None, 32, 32, 3]. It stands for [batch size, height, width, channels] (often shortened as NHWC). The None for batch size means that the batch size is flexible, which means that we can feed different batch sizes to the model without having to change the code. Pay attention to the order of your inputs because some models and frameworks might use a different arrangement, such as NCHW.
Next, I define the fully connected layer. Rather than implementing the raw equation, y = xW + b, I use a handy function that does that in one line and also applies the activation function. It expects input as a one-dimensional vector, though. So I flatten the images first.
I’m using the ReLU activation function here:
It simply converts all negative values to zeros. It’s been shown to work well in classification tasks and trains faster than sigmoid or tanh. For more background, check here and here.
The output of the fully connected layer is a logits vector of length 62 (technically, it’s [None, 62] because we’re dealing with a batch of logits vectors).
A row in the logits tensor might look like this: [0.3, 0, 0, 1.2, 2.1, .01, 0.4, ....., 0, 0]. The higher the value, the more likely that the image represents that label. Logits are not probabilities, though — They can have any value, and they don’t add up to 1. The actual absolute values of the logits are not important, just their values relative to each other. It’s easy to convert logits to probabilities using the softmax function if needed (it’s not needed here).
In this application, we just need the index of the largest value, which corresponds to the id of the label. The argmax op does that.
The argmax output will be integers in the range 0 to 61.
Choosing the right loss function is an area of research in and of itself, which I won’t delve into it here other than to say that cross-entropy is the most common function for classification tasks. If you’re not familiar with it, there is a really good explanation here and here.
Cross-entropy is a measure of difference between two vectors of probabilities. So we need to convert labels and the logits to probability vectors. The function sparse_softmax_cross_entropy_with_logits() simplifies that. It takes the generated logits and the groundtruth labels and does three things: converts the label indexes of shape [None] to logits of shape [None, 62] (one-hot vectors), then it runs softmax to convert both prediction logits and label logits to probabilities, and finally calculates the cross-entropy between the two. This generates a loss vector of shape [None] (1D of length = batch size), which we pass through reduce_mean() to get one single number that represents the loss value.
Choosing the optimization algorithm is another decision to make. I usually use the ADAM optimizer because it’s been shown to converge faster than simple gradient descent. This post does a great job comparing different gradient descent optimizers.
The last node in the graph is the initialization op, which simply sets the values of all variables to zeros (or to random values or whatever the variables are set to initialize to).
Notice that the code above doesn’t execute any of the ops yet. It’s just building the graph and describing its inputs. The variables we defined above, such as init, loss, predicted_labels don’t contain numerical values. They are references to ops that we’ll execute next.
This is where we iteratively train the model to minimize the loss function. Before we start training, though, we need to create a Session object.
I mentioned the Graph object earlier and how it holds all the Ops of the model. The Session, on the other hand, holds the values of all the variables. If a graph holds the equation y=xW+b then the session holds the actual values of these variables.
Usually the first thing to run after starting a session is the initialization op, init, to initialize the variables.
Then we start the training loop and run the train op repeatedly. While not necessary, it’s useful to run the loss op as well to print its values and monitor the progress of the training.
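Putting those last few steps together in sketch form (images32 and labels are the resized training images and their labels from earlier; the variable names are assumptions):
session = tf.Session(graph=graph)
session.run(init)
for i in range(201):
    _, loss_value = session.run(
        [train, loss],
        feed_dict={images_ph: images32, labels_ph: labels})
    if i % 10 == 0:
        print("Loss:", loss_value)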
In case you’re wondering, I set the loop to 201 so that the i % 10 condition is satisfied in the last round and prints the last loss value. The output should look something like this:
Now we have a trained model in memory in the Session object. To use it, we call session.run() just like in the training code. The predicted_labels op returns the output of the argmax() function, so that’s what we need to run. Here I classify 10 random images and print both, the predictions and the groundtruth labels for comparison.
In the notebook, I include a function to visualize the results as well. It generates something like this:
The visualization shows that the model is working, but doesn’t quantify how accurate it is. And you might’ve noticed that it’s classifying the training images, so we don’t know yet if the model generalizes to images that it hasn’t seen before. Next, we calculate a better evaluation metric.
To properly measure how the model generalizes to data it hasn’t seen, I do the evaluation on test data that I didn’t use in training. The BelgiumTS dataset makes this easy by providing two separate sets, one for training and one for testing.
In the notebook I load the test set, resize the images to 32x32, and then calculate the accuracy. This is the relevant part of the code that calculates the accuracy.
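Roughly like this (test_images32 and test_labels are the resized test set; a sketch of the idea rather than the notebook verbatim):
predicted = session.run(predicted_labels, feed_dict={images_ph: test_images32})
match_count = sum(int(y == y_hat) for y, y_hat in zip(test_labels, predicted))
accuracy = match_count / len(test_labels)
print("Accuracy: {:.3f}".format(accuracy))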
The accuracy I get in each run ranges between 0.40 and 0.70 depending on whether the model lands on a local minimum or a global minimum. This is expected when running a simple model like this one. In a future post I’ll talk about ways to improve the consistency of the results.
Congratulations! We have a working simple neural network. Given how simple this neural network is, training takes just a minute on my laptop so I didn’t bother saving the trained model. In the next part, I’ll add code to save and load trained models and expand to use multiple layers, convolutional networks, and data augmentation. Stay tuned!
Startups, deep learning, computer vision.
|
Stefan Kojouharov | 14.2K | 7 | https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463?source=tag_archive---------0---------------- | Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data | Over the past few months, I have been collecting AI cheat sheets. From time to time I share them with friends and colleagues and recently I have been getting asked a lot, so I decided to organize and share the entire collection. To make things more interesting and give context, I added descriptions and/or excerpts for each major topic.
This is the most complete list and the Big-O is at the very end, enjoy...
This machine learning cheat sheet will help you find the right estimator for the job, which is often the most difficult part. The flowchart points you to the documentation and a rough guide for each estimator, to help you understand the problem and how to solve it.
Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
In May 2017 Google announced the second generation of the TPU, as well as the availability of the TPUs in Google Compute Engine.[12] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs provide up to 11.5 petaflops.
In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than an end-to-end machine-learning framework. It presents a higher-level, more intuitive set of abstractions that make it easy to configure neural networks regardless of the backend scientific computing library.
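To illustrate that level of abstraction, defining and compiling a small network takes only a few lines (a sketch; the layer sizes here are arbitrary):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))  # hidden layer
model.add(Dense(1, activation='sigmoid'))               # binary output
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])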
NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy addresses the slowness problem partly by providing multidimensional arrays, along with functions and operators that operate efficiently on them; this requires rewriting some code, mostly inner loops, using NumPy.
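For example, replacing an inner loop with a single array operation is usually the whole trick (a toy sketch):
import numpy as np

values = list(range(1000000))
squares_loop = [v * v for v in values]  # plain Python: interpreted element by element

arr = np.arange(1000000)
squares_vectorized = arr * arr          # NumPy: one call into optimized compiled code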
The name ‘Pandas’ is derived from the term “panel data”, an econometrics term for multidimensional structured data sets.
The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson, is introduced as “Steve Woodward, our data wrangler”.
SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.[3]
matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[2] SciPy makes use of matplotlib.
pyplot is a matplotlib module which provides a MATLAB-like interface.[6] matplotlib is designed to be as usable as MATLAB, with the ability to use Python, with the advantage that it is free.
>>> If you like this list, you can let me know here. <<<
Stefan is the founder of Chatbot’s Life, a Chatbot media and consulting firm. Chatbot’s Life has grown to over 150k views per month and has become the premium place to learn about Bots & AI online. Chatbot’s Life has also consulted many of the top Bot companies like Swelly, Instavest, OutBrain, NearGroup and a number of Enterprises.
Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/
Bokeh Cheat Sheet: https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Bokeh_Cheat_Sheet.pdf
Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics
Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf
Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling
Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf
Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs
Keras: https://en.wikipedia.org/wiki/Keras
Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/
Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet
ML Cheat Sheet:: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html
Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY
Matplotlib: https://en.wikipedia.org/wiki/Matplotlib
Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/
Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/
Neural Networks: https://www.quora.com/Where-can-find-a-cheat-sheet-for-neural-network
Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE
NumPy: https://en.wikipedia.org/wiki/NumPy
Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM
Pandas: https://en.wikipedia.org/wiki/Pandas_(software)
Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc
Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ
Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet
Scikit-learn: https://en.wikipedia.org/wiki/Scikit-learn
Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html
Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI
SciPy: https://en.wikipedia.org/wiki/SciPy
TensorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html
Tensor Flow: https://en.wikipedia.org/wiki/TensorFlow
Founder of Chatbots Life. I help Companies Create Great Chatbots & AI Systems and share my Insights along the way.
Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
|
Avinash Sharma V | 6.9K | 10 | https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0?source=tag_archive---------1---------------- | Understanding Activation Functions in Neural Networks | Recently, a colleague of mine asked me a few questions like “why do we have so many activation functions?”, “why is that one works better than the other?”, ”how do we know which one to use?”, “is it hardcore maths?” and so on. So I thought, why not write an article on it for those who are familiar with neural network only at a basic level and is therefore, wondering about activation functions and their “why-how-mathematics!”.
NOTE: This article assumes that you have a basic knowledge of an artificial “neuron”. I would recommend reading up on the basics of neural networks before reading this article for better understanding.
So what does an artificial neuron do? Simply put, it calculates a “weighted sum” of its input, adds a bias and then decides whether it should be “fired” or not ( yeah right, an activation function does this, but let’s go with the flow for a moment ).
So consider a neuron.
Now, the value of Y can be anything ranging from -inf to +inf. The neuron really doesn’t know the bounds of the value. So how do we decide whether the neuron should fire or not ( why this firing pattern? Because we learnt from biology that this is the way the brain works, and the brain is a working testimony of an awesome and intelligent system ).
We decided to add “activation functions” for this purpose. To check the Y value produced by a neuron and decide whether outside connections should consider this neuron as “fired” or not. Or rather let’s say — “activated” or not.
The first thing that comes to our minds is how about a threshold based activation function? If the value of Y is above a certain value, declare it activated. If it’s less than the threshold, then say it’s not. Hmm great. This could work!
Activation function A = “activated” if Y > threshold else not
Alternatively, A = 1 if y> threshold, 0 otherwise
Well, what we just did is a “step function”, see the below figure.
Its output is 1 ( activated) when value > 0 (threshold) and outputs a 0 ( not activated) otherwise.
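In code, the step function is a one-liner (a quick sketch):
def step(y, threshold=0.0):
    # "fired" if the weighted sum crosses the threshold, otherwise not
    return 1 if y > threshold else 0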
Great. So this makes an activation function for a neuron. No confusions. However, there are certain drawbacks with this. To understand it better, think about the following.
Suppose you are creating a binary classifier. Something which should say a “yes” or “no” ( activate or not activate ). A Step function could do that for you! That’s exactly what it does, say a 1 or 0. Now, think about the use case where you would want multiple such neurons to be connected to bring in more classes. Class1, class2, class3 etc. What will happen if more than 1 neuron is “activated”. All neurons will output a 1 ( from step function). Now what would you decide? Which class is it? Hmm hard, complicated.
You would want the network to activate only 1 neuron and others should be 0 ( only then would you be able to say it classified properly/identified the class ). Ah! This is harder to train and converge this way. It would have been better if the activation was not binary and it instead would say “50% activated” or “20% activated” and so on. And then if more than 1 neuron activates, you could find which neuron has the “highest activation” and so on ( better than max, a softmax, but let’s leave that for now ).
In this case as well, if more than 1 neuron says “100% activated”, the problem still persists. I know! But since there are intermediate activation values for the output, learning can be smoother and easier ( less wiggly ), and the chances of more than 1 neuron being 100% activated are lower compared to a step function while training ( also depending on what you are training and the data ).
Ok, so we want something to give us intermediate ( analog ) activation values rather than saying “activated” or not ( binary ).
The first thing that comes to our minds would be Linear function.
A = cx
A straight line function where activation is proportional to input ( which is the weighted sum from neuron ).
This way, it gives a range of activations, so it is not binary activation. We can definitely connect a few neurons together and if more than 1 fires, we could take the max ( or softmax) and decide based on that. So that is ok too. Then what is the problem with this?
If you are familiar with gradient descent for training, you would notice that for this function, the derivative is a constant.
A = cx, so the derivative with respect to x is c. That means the gradient has no relationship with X. It is a constant gradient, and the descent is going to proceed on a constant gradient. If there is an error in prediction, the changes made by back propagation are constant and do not depend on the change in input delta(x) !!!
This is not that good! ( not always, but bear with me ). There is another problem too. Think about connected layers. Each layer is activated by a linear function. That activation in turn goes into the next level as input and the second layer calculates weighted sum on that input and it in turn, fires based on another linear activation function.
No matter how many layers we have, if all are linear in nature, the final activation function of last layer is nothing but just a linear function of the input of first layer! Pause for a bit and think about it.
That means these two layers ( or N layers ) can be replaced by a single layer. Ah! We just lost the ability of stacking layers this way. No matter how we stack, the whole network is still equivalent to a single layer with linear activation ( a combination of linear functions in a linear manner is still another linear function ).
Let’s move on, shall we?
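Next up is the sigmoid function, A = 1 / (1 + e^-x). A quick sketch:
import math

def sigmoid(y):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-y))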
Well, this looks smooth and “step function like”. What are the benefits of this? Think about it for a moment. First things first, it is nonlinear in nature. Combinations of this function are also nonlinear! Great. Now we can stack layers. What about non binary activations? Yes, that too!. It will give an analog activation unlike step function. It has a smooth gradient too.
And if you notice, between X values -2 to 2, Y values are very steep. Which means, any small changes in the values of X in that region will cause values of Y to change significantly. Ah, that means this function has a tendency to bring the Y values to either end of the curve.
Looks like it’s good for a classifier considering its property? Yes ! It indeed is. It tends to bring the activations to either side of the curve ( above x = 2 and below x = -2 for example). Making clear distinctions on prediction.
Another advantage of this activation function is, unlike linear function, the output of the activation function is always going to be in range (0,1) compared to (-inf, inf) of linear function. So we have our activations bound in a range. Nice, it won’t blow up the activations then.
This is great. Sigmoid functions are one of the most widely used activation functions today. Then what are the problems with this?
If you notice, towards either end of the sigmoid function, the Y values tend to respond very little to changes in X. What does that mean? The gradient in that region is going to be small. It gives rise to the problem of “vanishing gradients”. Hmm. So what happens when the activations reach near the “near-horizontal” part of the curve on either side?
Gradient is small or has vanished ( cannot make significant change because of the extremely small value ). The network refuses to learn further or is drastically slow ( depending on use case and until gradient /computation gets hit by floating point value limits ). There are ways to work around this problem and sigmoid is still very popular in classification problems.
Another activation function that is used is the tanh function.
Hm. This looks very similar to sigmoid. In fact, it is a scaled sigmoid function: tanh(x) = 2σ(2x) - 1.
Ok, now this has characteristics similar to sigmoid that we discussed above. It is nonlinear in nature, so great we can stack layers! It is bound to range (-1, 1) so no worries of activations blowing up. One point to mention is that the gradient is stronger for tanh than sigmoid ( derivatives are steeper). Deciding between the sigmoid or tanh will depend on your requirement of gradient strength. Like sigmoid, tanh also has the vanishing gradient problem.
Tanh is also a very popular and widely used activation function.
Later, comes the ReLu function,
A(x) = max(0,x)
The ReLu function is as shown above. It gives an output x if x is positive and 0 otherwise.
At first glance this looks like it has the same problems as the linear function, since it is linear in the positive axis. First of all, ReLu is nonlinear in nature. And combinations of ReLu are also non linear! ( in fact it is a good approximator. Any function can be approximated with combinations of ReLu). Great, so this means we can stack layers. It is not bound though. The range of ReLu is [0, inf). This means it can blow up the activation.
Another point that I would like to discuss here is the sparsity of the activation. Imagine a big neural network with a lot of neurons. Using a sigmoid or tanh will cause almost all neurons to fire in an analog way ( remember? ). That means almost all activations will be processed to describe the output of a network. In other words the activation is dense. This is costly. We would ideally want a few neurons in the network to not activate and thereby making the activations sparse and efficient.
ReLu gives us this benefit. Imagine a network with randomly initialized weights ( or normalised ) where almost 50% of the network yields 0 activation because of the characteristic of ReLu ( output 0 for negative values of x ). This means fewer neurons are firing ( sparse activation ) and the network is lighter. Woah, nice! ReLu seems to be awesome! Yes it is, but nothing is flawless.. Not even ReLu.
Because of the horizontal line in ReLu ( for negative X ), the gradient can go towards 0. For activations in that region of ReLu, the gradient will be 0, because of which the weights will not get adjusted during descent. That means those neurons which go into that state will stop responding to variations in error/input ( simply because the gradient is 0, nothing changes ). This is called the dying ReLu problem. This problem can cause several neurons to just die and not respond, making a substantial part of the network passive. There are variations of ReLu to mitigate this issue by simply making the horizontal line into a non-horizontal component: for example, y = 0.01x for x < 0 will make it a slightly inclined line rather than a horizontal line. This is leaky ReLu. There are other variations too. The main idea is to let the gradient be non zero and recover during training eventually.
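In code, both variants are trivial (a quick sketch; 0.01 is the commonly used leak factor mentioned above):
def relu(y):
    return max(0.0, y)

def leaky_relu(y, leak=0.01):
    # a small slope for negative inputs keeps the gradient from dying completely
    return y if y > 0 else leak * y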
ReLu is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. That is a good point to consider when we are designing deep neural nets.
Now, which activation function should we use? Does that mean we just use ReLu for everything we do? Or sigmoid or tanh? Well, yes and no. When you know the function you are trying to approximate has certain characteristics, you can choose an activation function which will approximate the function faster, leading to a faster training process. For example, a sigmoid works well for a classifier ( see the graph of sigmoid, doesn’t it show the properties of an ideal classifier? ) because approximating a classifier function as combinations of sigmoid is easier than, say, ReLu. This leads to a faster training process and convergence. You can use your own custom functions too! If you don’t know the nature of the function you are trying to learn, then maybe I would suggest starting with ReLu, and then working backwards. ReLu works most of the time as a general approximator!
In this article, I tried to describe a few activation functions used commonly. There are other activation functions too, but the general idea remains the same. Research for better activation functions is still ongoing. Hope you got the idea behind activation function, why they are used and how do we decide which one to use.
Musings of an AI, Deep Learning, Mathematics addict
|
Elle O'Brien | 2.3K | 6 | https://towardsdatascience.com/romance-novels-generated-by-artificial-intelligence-1b31d9c872b2?source=tag_archive---------2---------------- | Romance Novels, Generated by Artificial Intelligence | I’ve always been fascinated with romance novels — the kind they sell at the drugstore for a couple of dollars, usually with some attractive, soft-lit couples on the cover. So when I started futzing around with text-generating neural networks a few weeks ago, I developed an urgent curiosity to discover what artificial intelligence could contribute to the ever-popular genre. Maybe one day there will be entire books written by computers. For now, let’s start with titles.
I gathered over 20,000 Harlequin Romance novel titles and gave them to a neural network, a type of artificial intelligence that learns the structure of text. It’s powerful enough to string together words in a way that seems almost human. 90% human. The other 10% is all wackiness.
I was not disappointed with what came out. I even photoshopped some of my favorites into existence (the author names are synthesized from machine learning, too). Let’s have a look by theme:
A common theme in romance novels is pregnancy, and the word “baby” had a strong showing in the titles I trained the neural network on. Naturally, the neural network came up with a lot of baby-themed titles:
There’s an unusually high concentration of sheikhs, vikings, and billionaires in the Harlequin world. Likewise, the neural network generated some colorful new bachelor-types:
I have so many questions. How is the prince pregnant? What sort of consulting does the count do? Who is Butterfly Earl? And what makes the sheikh’s desires so convenient?
Although there are exceptions, most romance novels end in happily-ever-afters. A lot of them even start with an unexpected wedding — a marriage of convenience, or a stipulation of a business contract, or a sham that turns into real love. The neural network seems to have internalized something about matrimony:
Doctors and surgeons are common paramours for mistresses headed towards the marriage valley:
Christmas is a magical time for surgeons, sheikhs, playboys, dads, consultants, and the women who love them:
What or where is Knith? I just like Mission: Christmas...
This neural network has never seen the big Montana sky, but it has some questionable ideas about cowboys:
The neural network generated some decidedly PG-13 titles:
They can’t all live happily ever after. Some of the generated titles sounded like M. Night Shyamalan was a collaborator:
How did the word “fear” get in there? It’s possible the network generated it without having “fear” in the training set, but a subset of the Harlequin empire is geared towards paranormal and gothic romance that might have included the word (*Note: I checked, and there was “Veil of Fear” published in 2012).
To wrap it up, some of the adorable failures and near-misses generated by the neural network:
I hope you’ve enjoyed computer-generated romance novel titles half as much as I have. Maybe someone out there can write about the Virgin Viking, or the Consultant Count, or the Baby Surgeon Seduction. I’d buy it.
I built a webscraper in Python (thanks, Beautiful Soup!) that grabbed about 20,000 romance novel titles published under the Harlequin brand off of FictionDB.com. Harlequin is, to me, synonymous with the romance genre, although it comprises only a fraction (albeit a healthy one) of the entire market. I fed this list of book titles into a recurrent neural network, using software I got from GitHub, and waited a few hours for the magic to happen. The model I fit was a 3-layer, 256-node recurrent neural network. I also trained the network on the author list to create some new pen names. For more about the neural network I used, have a look at the fabulous work of Andrej Karpathy.
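For the curious, a rough sketch of what a title scraper like that might look like. The URL and the HTML selector below are placeholders, not the actual FictionDB page structure; you would inspect the real page to find the right tags:

import requests
from bs4 import BeautifulSoup

def scrape_titles(url):
    # Fetch a listing page and pull out the book titles.
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    # The tag and class here are hypothetical; adjust to the real page's markup.
    return [tag.get_text(strip=True) for tag in soup.select("a.book-title")]

titles = scrape_titles("https://www.example.com/publisher/harlequin?page=1")
with open("titles.txt", "w") as f:
    f.write("\n".join(titles))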
I discovered that “Surgery by the Sea” is actually a real novel, written by Sheila Douglas and published in 1979! So, this one isn’t an original neural network creation. Because the training set is rather small (only about 1 MB of text data), it’s to be expected that sometimes, the machine will spit out one of the titles it was trained on. One of the more challenging aspects of this project was discerning when that happened, since the real published titles can be more surprising than anything born out of artificial intelligence. For example: “The $4.98 Daddy” and “6'1” Grinch” are both real. In fact, the very first romance novel published by Harlequin was called “The Manatee”.
Computational scientist, software developer, science writer
Sharing concepts, ideas, and codes.
|
Slav Ivanov | 4.4K | 10 | https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607?source=tag_archive---------3---------------- | 37 Reasons why your Neural Network is not working – Slav | The network had been training for the last 12 hours. It all looked good: the gradients were flowing and the loss was decreasing. But then came the predictions: all zeroes, all background, nothing detected. “What did I do wrong?” — I asked my computer, who didn’t answer.
Where do you start checking if your model is outputting garbage (for example predicting the mean of all outputs, or it has really poor accuracy)?
A network might not be training for a number of reasons. Over the course of many debugging sessions, I would often find myself doing the same checks. I’ve compiled my experience along with the best ideas around in this handy list. I hope they would be of use to you, too.
A lot of things can go wrong. But some of them are more likely to be broken than others. I usually start with this short list as an emergency first response:
If the steps above don’t do it, start going down the following big list and verify things one by one.
Check if the input data you are feeding the network makes sense. For example, I’ve more than once mixed the width and the height of an image. Sometimes, I would feed all zeroes by mistake. Or I would use the same batch over and over. So print/display a couple of batches of input and target output and make sure they are OK.
Try passing random numbers instead of actual data and see if the error behaves the same way. If it does, it’s a sure sign that your net is turning data into garbage at some point. Try debugging layer by layer /op by op/ and see where things go wrong.
Your data might be fine but the code that passes the input to the net might be broken. Print the input of the first layer before any operations and check it.
Check if a few input samples have the correct labels. Also make sure shuffling input samples works the same way for output labels.
Maybe the non-random part of the relationship between the input and output is too small compared to the random part (one could argue that stock prices are like this). In other words, the inputs are not sufficiently related to the output. There isn't a universal way to detect this, as it depends on the nature of the data.
This happened to me once when I scraped an image dataset off a food site. There were so many bad labels that the network couldn’t learn. Check a bunch of input samples manually and see if labels seem off.
The cutoff point is up for debate, as this paper got above 50% accuracy on MNIST using 50% corrupted labels.
If your dataset hasn’t been shuffled and has a particular order to it (ordered by label) this could negatively impact the learning. Shuffle your dataset to avoid this. Make sure you are shuffling input and labels together.
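A minimal sketch of shuffling inputs and labels together with NumPy (a common bug is shuffling one array but not the other):

import numpy as np

X = np.arange(20).reshape(10, 2)    # toy inputs
y = np.arange(10)                   # toy labels, one per input row

perm = np.random.permutation(len(X))
X_shuffled, y_shuffled = X[perm], y[perm]   # the same permutation keeps each input paired with its label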
Are there a 1000 class A images for every class B image? Then you might need to balance your loss function or try other class imbalance approaches.
If you are training a net from scratch (i.e. not finetuning), you probably need lots of data. For image classification, people say you need a 1000 images per class or more.
This can happen in a sorted dataset (i.e. the first 10k samples contain the same class). Easily fixable by shuffling the dataset.
This paper points out that having a very large batch can reduce the generalization ability of the model.
Thanks to @hengcherkeng for this one:
Did you standardize your input to have zero mean and unit variance?
Augmentation has a regularizing effect. Too much of this combined with other forms of regularization (weight L2, dropout, etc.) can cause the net to underfit.
If you are using a pretrained model, make sure you are using the same normalization and preprocessing as the model was when training. For example, should an image pixel be in the range [0, 1], [-1, 1] or [0, 255]?
CS231n points out a common pitfall: any preprocessing statistics (e.g. the data mean) must only be computed on the training data and then applied to the validation/test data; computing them across the entire dataset before splitting it is a mistake.
Also, check for different preprocessing in each sample or batch.
This will help with finding where the issue is. For example, if the target output is an object class and coordinates, try limiting the prediction to object class only.
Again from the excellent CS231n: Initialize with small parameters, without regularization. For example, if we have 10 classes, at chance means we will get the correct class 10% of the time, and the Softmax loss is the negative log probability of the correct class so: -ln(0.1) = 2.302.
After this, try increasing the regularization strength which should increase the loss.
If you implemented your own loss function, check it for bugs and add unit tests. Often, my loss would be slightly incorrect and hurt the performance of the network in a subtle way.
If you are using a loss function provided by your framework, make sure you are passing to it what it expects. For example, in PyTorch I would mix up the NLLLoss and CrossEntropyLoss, as the former expects log-probabilities (a log-softmax output) while the latter takes raw scores and applies the softmax internally.
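A minimal PyTorch sketch of the difference; the two loss values should match:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)              # raw network outputs for 4 samples and 3 classes
targets = torch.tensor([0, 2, 1, 0])

loss_ce = F.cross_entropy(logits, targets)                     # expects raw logits
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)   # expects log-probabilities
print(loss_ce.item(), loss_nll.item())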
If your loss is composed of several smaller loss functions, make sure their magnitude relative to each is correct. This might involve testing different combinations of loss weights.
Sometimes the loss is not the best predictor of whether your network is training properly. If you can, use other metrics like accuracy.
Did you implement any of the layers in the network yourself? Check and double-check to make sure they are working as intended.
Check if you unintentionally disabled gradient updates for some layers/variables that should be learnable.
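In PyTorch, for example, a quick way to list which parameters will actually receive gradient updates is a sketch like this (the model here is just a stand-in):

import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))   # placeholder model
for name, param in model.named_parameters():
    print(name, param.requires_grad)    # anything printing False will not be trained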
Maybe the expressive power of your network is not enough to capture the target function. Try adding more layers or more hidden units in fully connected layers.
If your input looks like (k, H, W) = (64, 64, 64) it’s easy to miss errors related to wrong dimensions. Use weird numbers for input dimensions (for example, different prime numbers for each dimension) and check how they propagate through the network.
If you implemented Gradient Descent by hand, gradient checking makes sure that your backpropagation works like it should. More info: 1 2 3.
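The idea, roughly: compare your analytic gradient against a finite-difference estimate. A toy sketch for a hand-written scalar loss:

import numpy as np

def loss(w):
    return (w ** 2).sum()      # toy loss function

def analytic_grad(w):
    return 2 * w               # gradient derived by hand

w = np.random.randn(5)
eps = 1e-5
numeric_grad = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                         for e in np.eye(len(w))])
print(np.max(np.abs(numeric_grad - analytic_grad(w))))   # should be tiny, around 1e-10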
Overfit a small subset of the data and make sure it works. For example, train with just 1 or 2 examples and see if your network can learn to differentiate these. Move on to more samples per class.
If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps.
Maybe you using a particularly bad set of hyperparameters. If feasible, try a grid search.
Too much regularization can cause the network to underfit badly. Reduce regularization such as dropout, batch norm, weight/bias L2 regularization, etc. In the excellent “Practical Deep Learning for coders” course, Jeremy Howard advises getting rid of underfitting first. This means you overfit the training data sufficiently, and only then addressing overfitting.
Maybe your network needs more time to train before it starts making meaningful predictions. If your loss is steadily decreasing, let it train some more.
Some frameworks have layers like Batch Norm, Dropout, and other layers behave differently during training and testing. Switching to the appropriate mode might help your network to predict properly.
Your choice of optimizer shouldn’t prevent your network from training unless you have selected particularly bad hyperparameters. However, the proper optimizer for a task can be helpful in getting the most training in the shortest amount of time. The paper which describes the algorithm you are using should specify the optimizer. If not, I tend to use Adam or plain SGD with momentum.
Check this excellent post by Sebastian Ruder to learn more about gradient descent optimizers.
A low learning rate will cause your model to converge very slowly.
A high learning rate will quickly decrease the loss in the beginning but might have a hard time finding a good solution.
Play around with your current learning rate by multiplying it by 0.1 or 10.
Getting a NaN (Not-a-Number) is a much bigger issue when training RNNs (from what I hear). Some approaches to fix it:
Did I miss anything? Is anything wrong? Let me know by leaving a reply below.
Entrepreneur / Hacker
Machine learning, Deep learning and other types of learning.
|
Slav Ivanov | 2.9K | 9 | https://blog.slavv.com/picking-a-gpu-for-deep-learning-3d4795c273b9?source=tag_archive---------4---------------- | Picking a GPU for Deep Learning – Slav | Quite a few people have asked me recently about choosing a GPU for Machine Learning. As it stands, success with Deep Learning heavily dependents on having the right hardware to work with. When I was building my personal Deep Learning box, I reviewed all the GPUs on the market. In this article, I’m going to share my insights about choosing the right graphics processor. Also, we’ll go over:
Deep Learning (DL) is part of the field of Machine Learning (ML). DL works by approximating a solution to a problem using neural networks. One of the nice properties of neural networks is that they find patterns in the data (features) by themselves. This is opposed to having to tell your algorithm what to look for, as in the olde times. However, this often means the model starts from a blank slate (unless we are transfer learning). To capture the nature of the data from scratch, the neural net needs to process a lot of information. There are two ways to do so: with a CPU or a GPU.
The main computational module in a computer is the Central Processing Unit (better known as the CPU). It is designed to do computation rapidly on a small amount of data. For example, multiplying a few numbers on a CPU is blazingly fast. But it struggles when operating on a large amount of data, e.g., multiplying matrices of tens or hundreds of thousands of numbers. Behind the scenes, DL mostly consists of operations like matrix multiplication.
Amusingly, 3D computer games rely on these same operations to render that beautiful landscape you see in Rise of the Tomb Raider. Thus, GPUs were developed to handle lots of parallel computations using thousands of cores. Also, they have a large memory bandwidth to deal with the data for these computations. This makes them the ideal commodity hardware to do DL on. Or at least, until ASICs for Machine Learning like Google’s TPU make their way to market.
For me, the most important reason for picking a powerful graphics processor is saving time while prototyping models. If the networks train faster the feedback time will be shorter. Thus, it would be easier for my brain to connect the dots between the assumptions I had for the model and its results.
See Tim Dettmers’ answer to “Why are GPUs well-suited to deep learning?” on Quora for a better explanation. Also for an in-depth, albeit slightly outdated GPUs comparison see his article “Which GPU(s) to Get for Deep Learning”.
The main characteristics of a GPU related to DL are:
There are two reasons for having multiple GPUs: you want to train several models at once, or you want to do distributed training of a single model. We’ll go over each one.
Training several models at once is a great technique to test different prototypes and hyperparameters. It also shortens your feedback cycle and lets you try out many things at once.
Distributed training, or training a single network on several video cards, is slowly but surely gaining traction. Nowadays, there are easy-to-use approaches to this for Tensorflow and Keras (via Horovod), CNTK and PyTorch. The distributed training libraries offer almost linear speed-ups with the number of cards. For example, with 2 GPUs you get 1.8x faster training.
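As a rough sketch of what that looks like with Horovod and Keras (based on Horovod's documented usage pattern; the model, learning rate, and data below are placeholders, not a recommended setup):

import keras
import horovod.keras as hvd

hvd.init()    # one process per GPU, launched e.g. with horovodrun

model = keras.models.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are averaged across GPUs every step.
opt = hvd.DistributedOptimizer(keras.optimizers.SGD(lr=0.01 * hvd.size()))
model.compile(loss='categorical_crossentropy', optimizer=opt)

callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]   # sync initial weights from rank 0
# model.fit(x_train, y_train, epochs=10, callbacks=callbacks)     # x_train/y_train: your data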
PCIe Lanes (Updated): The caveat to using multiple video cards is that you need to be able to feed them with data. For this purpose, each GPU should have 16 PCIe lanes available for data transfer. Tim Dettmers points out that having 8 PCIe lanes per card should only decrease performance by “0–10%” for two GPUs.
For a single card, any desktop processor and chipset like Intel i5 7500 and Asus TUF Z270 will use 16 lanes.
However, for two GPUs, you can go 8x/8x lanes or get a processor AND a motherboard that support 32 PCIe lanes. 32 lanes are outside the realm of desktop CPUs. An Intel Xeon with a MSI — X99A SLI PLUS will do the job.
For 3 or 4 GPUs, go with 8x lanes per card with a Xeon with 24 to 32 PCIe lanes.
To have 16 PCIe lanes available for 3 or 4 GPUs, you need a monstrous processor. Something in the class of an AMD ThreadRipper (64 lanes) with a corresponding motherboard.
Also, for more GPUs you need a faster processor and hard disk to be able to feed them data quickly enough, so they don’t sit idle.
Nvidia has been focusing on Deep Learning for a while now, and the head start is paying off. Their CUDA toolkit is deeply entrenched. It works with all major DL frameworks: Tensorflow, Pytorch, Caffe, CNTK, etc. As of now, none of these work out of the box with OpenCL (the CUDA alternative), which runs on AMD GPUs. I hope support for OpenCL comes soon, as there are great inexpensive GPUs from AMD on the market. Also, some AMD cards support half-precision computation, which doubles their performance and effective VRAM size.
Currently, if you want to do DL and want to avoid major headaches, choose Nvidia.
Your GPU needs a computer around it:
Hard Disk: First, you need to read the data off the disk. An SSD is recommended here, but an HDD can work as well.
CPU: That data might have to be decoded by the CPU (e.g. jpegs). Fortunately, any mid-range modern processor will do just fine.
Motherboard: The data passes via the motherboard to reach the GPU. For a single video card, almost any chipset will work. If you are planning on working with multiple graphic cards, read this section.
RAM: It is recommended to have 2 gigabytes of memory for every gigabyte of video card RAM. Having more certainly helps in some situations, like when you want to keep an entire dataset in memory.
Power supply: It should provide enough power for the CPU and the GPUs, plus 100 watts extra.
You can get all of this for $500 to $1000. Or even less if you buy a used workstation.
Here is performance comparison between all cards. Check the individual card profiles below. Notably, the performance of Titan XP and GTX 1080 Ti is very close despite the huge price gap between them.
The price comparison reveals that GTX 1080 Ti, GTX 1070 and GTX 1060 have great value for the compute performance they provide. All the cards are in the same league value-wise, except Titan XP.
The king of the hill. When every GB of VRAM matters, this card has more than any other on the (consumer) market. It’s only a recommended buy if you know why you want it.
For the price of Titan X, you could get two GTX 1080s, which is a lot of power and 16 GBs of VRAM.
This card is what I currently use. It’s a great high-end option, with lots of RAM and high throughput. Very good value.
I recommend this GPU if you can afford it. It works great for Computer Vision or Kaggle competitions.
Quite capable mid to high-end card. The price was reduced from $700 to $550 when 1080 Ti was introduced. 8 GB is enough for most Computer Vision tasks. People regularly compete on Kaggle with these.
The newest card in Nvidia’s lineup. If 1080 is over budget, this will get you the same amount of VRAM (8 GB). Also, 80% of the performance for 80% of the price. Pretty sweet deal.
It's hard to get these nowadays because they are used for cryptocurrency mining. They come with a considerable amount of VRAM for the price but are somewhat slower. If you can get one (or a couple) second-hand at a good price, go for it.
It’s quite cheap but 6 GB VRAM is limiting. That’s probably the minimum you want to have if you are doing Computer Vision. It will be okay for NLP and categorical data models.
Also available as P106–100 for cryptocurrency mining, but it’s the same card without a display output.
The entry-level card which will get you started but not much more. Still, if you are unsure about getting in Deep Learning, this might be a cheap way to get your feet wet.
Titan X Pascal: It used to be the best consumer GPU Nvidia had to offer. Made obsolete by the 1080 Ti, which has the same specs and is 40% cheaper.
Tesla GPUs: This includes the K40, K80 (which is 2x K40 in one), P100, and others. You might already be using these via Amazon Web Services, Google Cloud Platform, or another cloud provider.
In my previous article, I did some benchmarks comparing the GTX 1080 Ti and the K40. The 1080 Ti performed five times faster than the Tesla card and 2.5x faster than the K80. The K40 has 12 GB VRAM and the K80 a whopping 24 GB.
In theory, the P100 and GTX 1080 Ti should be in the same league performance-wise. However, this cryptocurrency comparison has P100 lagging in every benchmark. It is worth noting that you can do half-precision on P100, effectively doubling the performance and VRAM size.
On top of all this, the K40 goes for over $2000, the K80 for over $3000, and the P100 is about $4500. And they still get eaten alive by a desktop-grade card. Obviously, as it stands, I don't recommend getting them.
All the specs in the world won’t help you if you don’t know what you are looking for. Here are my GPU recommendations depending on your budget:
I have over $1000: Get as many GTX 1080 Ti or GTX 1080 as you can. If you have 3 or 4 GPUs running in the same box, beware of issues with feeding them with data. Also keep in mind the airflow in the case and the space on the motherboard.
I have $700 to $900: GTX 1080 Ti is highly recommended. If you want to go multi-GPU, get 2x GTX 1070 (if you can find them) or 2x GTX 1070 Ti. Kaggle, here I come!
I have $400 to $700: Get the GTX 1080 or GTX 1070 Ti. Maybe 2x GTX 1060 if you really want 2 GPUs. However, know that 6 GB per model can be limiting.
I have $300 to $400: GTX 1060 will get you started. Unless you can find a used GTX 1070.
I have less than $300: Get GTX 1050 Ti or save for GTX 1060 if you are serious about Deep Learning.
Deep Learning has the great promise of transforming many areas of our life. Unfortunately, learning to wield this powerful tool requires good hardware. Hopefully, I've given you some clarity on where to start in this quest.
Disclosure: The above are affiliate links, to help me pay for, well, more GPUs.
Entrepreneur / Hacker
Machine learning, Deep learning and other types of learning.
|
gk_ | 1.8K | 6 | https://machinelearnings.co/text-classification-using-neural-networks-f5cd7b8765c6?source=tag_archive---------5---------------- | Text Classification using Neural Networks – Machine Learnings | Understanding how chatbots work is important. A fundamental piece of machinery inside a chat-bot is the text classifier. Let’s look at the inner workings of an artificial neural network (ANN) for text classification.
We’ll use 2 layers of neurons (1 hidden layer) and a “bag of words” approach to organizing our training data. Text classification comes in 3 flavors: pattern matching, algorithms, neural nets. While the algorithmic approach using Multinomial Naive Bayes is surprisingly effective, it suffers from 3 fundamental flaws:
As with its ‘Naive’ counterpart, this classifier isn’t attempting to understand the meaning of a sentence, it’s trying to classify it. In fact so called “AI chat-bots” do not understand language, but that’s another story.
Let’s examine our text classifier one section at a time. We will take the following steps:
The code is here, we’re using iPython notebook which is a super productive way of working on data science projects. The code syntax is Python.
We begin by importing our natural language toolkit. We need a way to reliably tokenize sentences into words and a way to stem words.
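A sketch of what those imports might look like. The specific choice of NLTK's Lancaster stemmer below is an assumption on my part; any stemmer will do:

import nltk
from nltk.stem.lancaster import LancasterStemmer

# nltk.download('punkt')    # needed once, for the tokenizer models
stemmer = LancasterStemmer()

words = nltk.word_tokenize("Is it having a good day?")
print([stemmer.stem(w.lower()) for w in words])    # 'having' and 'have' reduce to the same stem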
And our training data, 12 sentences belonging to 3 classes (‘intents’).
We can now organize our data structures for documents, classes and words.
Notice that each word is stemmed and lower-cased. Stemming helps the machine equate words like “have” and “having”. We don’t care about case.
Our training data is transformed into “bag of words” for each sentence.
The above step is a classic in text classification: each training sentence is reduced to an array of 0’s and 1’s against the array of unique words in the corpus.
is stemmed:
then transformed to input: a 1 for each word in the bag (the ? is ignored)
and output: the first class
Note that a sentence could be given multiple classes, or none.
Make sure the above makes sense and play with the code until you grok it.
Next we have our core functions for our 2-layer neural network.
If you are new to artificial neural networks, here is how they work.
We use numpy because we want our matrix multiplication to be fast.
We use a sigmoid function to normalize values and its derivative to measure the error rate. Iterating and adjusting until our error rate is acceptably low.
Also below we implement our bag-of-words function, transforming an input sentence into an array of 0’s and 1’s. This matches precisely with our transform for training data, always crucial to get this right.
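A minimal sketch of those two helpers (the function names here are illustrative, not necessarily the ones used in the linked notebook):

import numpy as np
import nltk
from nltk.stem.lancaster import LancasterStemmer

stemmer = LancasterStemmer()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_output_to_derivative(output):
    return output * (1.0 - output)    # derivative expressed in terms of the sigmoid's output

def bow(sentence, vocabulary):
    # 1 for every vocabulary word that appears in the stemmed, lower-cased sentence, 0 otherwise.
    tokens = [stemmer.stem(w.lower()) for w in nltk.word_tokenize(sentence)]
    return np.array([1 if word in tokens else 0 for word in vocabulary])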
And now we code our neural network training function to create synaptic weights. Don’t get too excited, this is mostly matrix multiplication — from middle-school math class.
We are now ready to build our neural network model, we will save this as a json structure to represent our synaptic weights.
You should experiment with different ‘alpha’ (gradient descent parameter) and see how it affects the error rate. This parameter helps our error adjustment find the lowest error rate:
synapse_0 += alpha * synapse_0_weight_update
We use 20 neurons in our hidden layer, you can adjust this easily. These parameters will vary depending on the dimensions and shape of your training data, tune them down to ~10^-3 as a reasonable error rate.
The synapse.json file contains all of our synaptic weights, this is our model.
This classify() function is all that’s needed for the classification once synapse weights have been calculated: ~15 lines of code.
The catch: if there’s a change to the training data our model will need to be re-calculated. For a very large dataset this could take a non-insignificant amount of time.
We can now generate the probability of a sentence belonging to one (or more) of our classes. This is super fast because it’s dot-product calculation in our previously defined think() function.
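Roughly, the inference path looks like the sketch below, reusing the bow() and sigmoid() helpers sketched earlier; synapse_0 and synapse_1 stand for the two weight matrices loaded from synapse.json, and the threshold is a placeholder you would tune for your application:

def think(sentence, vocabulary, synapse_0, synapse_1):
    x = bow(sentence, vocabulary)            # input layer: bag of words
    l1 = sigmoid(np.dot(x, synapse_0))       # hidden layer
    l2 = sigmoid(np.dot(l1, synapse_1))      # output layer: one score per class
    return l2

def classify(sentence, vocabulary, classes, synapse_0, synapse_1, threshold=0.2):
    scores = think(sentence, vocabulary, synapse_0, synapse_1)
    results = [(classes[i], s) for i, s in enumerate(scores) if s > threshold]
    return sorted(results, key=lambda r: r[1], reverse=True)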
Experiment with other sentences and different probabilities, you can then add training data and improve/expand the model. Notice the solid predictions with scant training data.
Some sentences will produce multiple predictions (above a threshold). You will need to establish the right threshold level for your application. Not all text classification scenarios are the same: some predictive situations require more confidence than others.
The last classification shows some internal details:
Notice the bag-of-words (bow) for the sentence, 2 words matched our corpus. The neural-net also learns from the 0’s, the non-matching words.
A low-probability classification is easily shown by providing a sentence where ‘a’ (common word) is the only match, for example:
Here you have a fundamental piece of machinery for building a chat-bot, capable of handling a large # of classes (‘intents’) and suitable for classes with limited or extensive training data (‘patterns’). Adding one or more responses to an intent is trivial.
Philosopher, Entrepreneur, Investor
Understand how machine learning and artificial intelligence will change your work & life.
|
nafrondel | 1.7K | 5 | https://medium.com/@nafrondel/you-requested-someone-with-a-degree-in-this-holds-up-hand-d4bf18e96ff?source=tag_archive---------6---------------- | You requested someone with a degree in this? *Holds up hand* | You requested someone with a degree in this? *Holds up hand*
So there are two main schools of Artificial Intelligence — Symbolic and non-symbolic.
Symbolic says the best way to make AI is to make an expert AI — e.g. if you want a doctor AI, you feed it medical text books and it answers questions by looking it up in the text book.
Non-symbolic says the best way to make AI is to decide that computers are better at understanding in computer, so give the information to the AI and let it turn that in to something it understands.
As a bit of an apt aside — consider the Chinese room thought experiment. Imagine you put someone in a room with shelves full of books. The books are filled with symbols and look up tables and the person inside is told “You will be given a sheet of paper with symbols on. Use the books in the room to look up the symbols to write in reply.” Then a person outside the room posts messages in to the room in Mandarin and gets messages back in Mandarin. The person inside the room doesn’t understand Mandarin, the knowledge is all in the books, but to the person outside the room it looks like they understand Mandarin.
That is how symbolic AI works. It has no inate knowledge of the subject mater, it just follows instructions. Even if some if those instructions are to update the books.
Non-symbolic AI says that it’d be better if the AI wrote the books itself. So looking back at the Chinese Room, this is like teaching the person in the room Mandarin, and the books are their study notes. The trouble is, teaching someone Mandarin takes time and effort as we’re starting with a blank slate here.
But consider that it takes decades to teach a child their first language, yet it takes only a little more effort to teach them a second language. So back to the AI — once we teach it one language, we want it to be like the child. We want it to be easy for it to learn a second language.
This is where Artificial Neural Networks come in. These are our blank slate children. They're made up of three parts: inputs, neurones, and outputs. The neurones are where the magic happens: they're modelled on brains. They're a blob of neurones that can connect up to one another or cut links, so they can join one bit of the brain up to another and let a signal go from one place to another. This is what joins the input up to the output. And in the Pavlovian way, when something good happens, the brain remembers by strengthening the link between neurones. But just like a baby, these start out pretty much random, so all you get out is baby babble. But we don't want baby babble, we have to teach it how to get from dog to chien, not dog to goobababaa.
When teaching the ANN, you give it an input, and if the output is wrong, give it a tap on the nose: the neurones remember "whatever we just did was wrong, don't do it again" by decreasing the value on the links between the neurones that led to the wrong answer. If it gets it right, give it a rub on the head and it does the opposite: it increases the numbers, meaning it'll be more likely to take that path next time. This means that over time, it'll join up the input Dog to the output Chien.
So how does this explain the article?
Well. ANNs work in both directions, we can give it outputs and it’ll give us back inputs by following the path of neurones back in the opposite direction. So by teaching it Dog means Chien, it also knows Chien could mean Dog. That also means we can teach it that Perro means Dog when we’re speaking Spanish. So when we teach it, the fastest way for it to go from Perro to Dog is to follow the same path that took Chien to Dog. Meaning over time it will pull the neurones linking Chien and Dog closer to Perro as well, which links Perro to Chien as well.
This three way link in the middle of Perro, Dog and Chien is the language the google AI is creating for itself.
Backing up a bit to our imaginary child learning a new language, when they learn their first language (e.g. English), they don’t write an English dictionary in their head, they hear the words and map them to an idea that the words represent. This is why people frequently misquote films, they remember what the quote meant, not what the words were. So when the child learns a second language, they hear Chien as being French, but map it to the idea of dog. Then when they hear Perro they hear it as Spanish but map that to the idea of dog too. This means the child only has to learn about the idea of a dog once, but can then link that idea up to many languages or synonyms for dog. And this is what the Google AI is doing. Instead of thinking if dog=chien, and chien=perro, perro must = dog, it thinks dog=0x3b chien =0x3b perro=0x3b. Where 0x3b is the idea of dog, meaning it can then turn 0x3b in to whichever language you ask for.
Tl;Dr: It wasn’t big news because Artificial Neural Networks have been doing this since they were invented in the 40s. And the entire non-symbolic branch of AI is all about having computers invent their own language to understand and learn things.
P.S. It really is smart enough to warrant that excitement! Most people have no idea how much they rely on AI. From the relatively simple AI that runs their washing machine, to the AI that reads the address hand written on mail and then figures out the best way to deliver it. These are real everyday machines making decisions for us. Even your computer mouse has AI in it to determine what you wanted to point at rather than what you actually pointed at (on a 1080p screen, there are 2 million points you could click on, it’s not by accident that it’s pretty easy to pick the correct one). Mobile phones constantly run AI to decide which phone tower to connect to, while the backbone of the internet is a huge interconnected AI deciding the fastest way to get data from one computer to another. Thinking, decision making AI is in our hands, beneath our feet, in our cars and almost every electronic device we have.
The robots have already taken over ;)
|
Neelabh Pant | 2K | 11 | https://blog.statsbot.co/time-series-prediction-using-recurrent-neural-networks-lstms-807fa6ca7f?source=tag_archive---------7---------------- | A Guide For Time Series Prediction Using Recurrent Neural Networks (LSTMs) | The Statsbot team has already published the article about using time series analysis for anomaly detection. Today, we’d like to discuss time series prediction with a long short-term memory model (LSTMs). We asked a data scientist, Neelabh Pant, to tell you about his experience of forecasting exchange rates using recurrent neural networks.
As an Indian guy living in the US, I have a constant flow of money from home to me and vice versa. If the USD is stronger in the market, then the Indian rupee (INR) goes down, hence, a person from India buys a dollar for more rupees. If the dollar is weaker, you spend less rupees to buy the same dollar.
If one can predict how much a dollar will cost tomorrow, then this can guide one’s decision making and can be very important in minimizing risks and maximizing returns. Looking at the strengths of a neural network, especially a recurrent neural network, I came up with the idea of predicting the exchange rate between the USD and the INR.
There are a lot of methods of forecasting exchange rates such as:
In this article, we’ll tell you how to predict the future exchange rate behavior using time series analysis and by making use of machine learning with time series.
Let us begin by talking about sequence problems. The simplest machine learning problem involving a sequence is a one to one problem.
In this case, we have one data input or tensor to the model and the model generates a prediction with the given input. Linear regression, classification, and even image classification with a convolutional network fall into this category. We can extend this formulation to allow the model to make use of the past values of the input and the output.
It is known as the one to many problem. The one to many problem starts like the one to one problem where we have an input to the model and the model generates one output. However, the output of the model is now fed back to the model as a new input. The model now can generate a new output and we can continue like this indefinitely. You can now see why these are known as recurrent neural networks.
A recurrent neural network deals with sequence problems because their connections form a directed cycle. In other words, they can retain state from one iteration to the next by using their own output as input for the next step. In programming terms this is like running a fixed program with certain inputs and some internal variables. The simplest recurrent neural network can be viewed as a fully connected neural network if we unroll the time axes.
In this univariate case only two weights are involved: the weight multiplying the current input xt, which is u, and the weight multiplying the previous output yt-1, which is w, so that yt = u*xt + w*yt-1. This formula works like an exponentially weighted moving average (EWMA) in that it mixes past values of the output with the current value of the input.
One can build a deep recurrent neural network by simply stacking units to one another. A simple recurrent neural network works well only for a short-term memory. We will see that it suffers from a fundamental problem if we have a longer time dependency.
As we have talked about, a simple recurrent network suffers from a fundamental problem of not being able to capture long-term dependencies in a sequence. This is a problem because we want our RNNs to analyze text and answer questions, which involves keeping track of long sequences of words.
In the late '90s, LSTM was proposed by Sepp Hochreiter and Jurgen Schmidhuber. It is relatively insensitive to gap length compared to alternatives such as simple RNNs, hidden Markov models, and other sequence learning methods, in numerous applications.
This model is organized in cells which include several operations. LSTM has an internal state variable, which is passed from one cell to another and modified by Operation Gates.
1. Forget Gate
It is a sigmoid layer that takes the output at t-1 and the current input at time t and concatenates them into a single tensor and applies a linear transformation followed by a sigmoid. Because of the sigmoid, the output of this gate is between 0 and 1. This number is multiplied with the internal state and that is why the gate is called a forget gate. If ft=0 then the previous internal state is completely forgotten, while if ft=1 it will be passed through unaltered.
2. Input Gate
The input gate takes the previous output and the new input and passes them through another sigmoid layer. This gate returns a value between 0 and 1. The value of the input gate is multiplied with the output of the candidate layer.
This layer applies a hyperbolic tangent to the mix of input and previous output, returning a candidate vector to be added to the internal state.
The internal state is updated with this rule:
The previous state is multiplied by the forget gate and then added to the fraction of the new candidate allowed by the input gate.
3. Output Gate
This gate controls how much of the internal state is passed to the output and it works in a similar way to the other gates.
These three gates described above have independent weights and biases, hence the network will learn how much of the past output to keep, how much of the current input to keep, and how much of the internal state to send out to the output.
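Putting the three gates together, the standard formulation (written here in LaTeX notation, following the common LSTM write-ups; W and b are the learned weights and biases, \sigma is the sigmoid, \odot is element-wise multiplication, and [h_{t-1}, x_t] is the concatenation of the previous output and the current input) is:

f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)          (forget gate)
i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)          (input gate)
\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)   (candidate)
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t       (internal state update)
o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)          (output gate)
h_t = o_t \odot \tanh(C_t)                            (output)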
In a recurrent neural network, you not only give the network the data, but also the state of the network one moment before. For example, if I say “Hey! Something crazy happened to me when I was driving” there is a part of your brain that is flipping a switch that’s saying “Oh, this is a story Neelabh is telling me. It is a story where the main character is Neelabh and something happened on the road.” Now, you carry a little part of that one sentence I just told you. As you listen to all my other sentences you have to keep a bit of information from all past sentences around in order to understand the entire story.
Another example is video processing, where you would again need a recurrent neural network. What happens in the current frame is heavily dependent upon what was in the last frame of the movie most of the time. Over a period of time, a recurrent neural network tries to learn what to keep and how much to keep from the past, and how much information to keep from the present state, which makes it so powerful as compared to a simple feed forward neural network.
I was impressed with the strengths of a recurrent neural network and decided to use them to predict the exchange rate between the USD and the INR. The dataset used in this project is the exchange rate data between January 2, 1980 and August 10, 2017. Later, I’ll give you a link to download this dataset and experiment with it.
The dataset displays the value of $1 in rupees. We have a total of 13,730 records starting from January 2, 1980 to August 10, 2017.
Over the period, the price to buy $1 in rupees has been rising. One can see that there was a huge dip in the American economy during 2007–2008, which was hugely caused by the great recession during that period. It was a period of general economic decline observed in world markets during the late 2000s and early 2010s.
This period was not very good for the world’s developed economies, particularly in North America and Europe (including Russia), which fell into a definitive recession. Many of the newer developed economies suffered far less impact, particularly China and India, whose economies grew substantially during this period.
Now, to train the machine we need to divide the dataset into test and training sets. It is very important when you do time series to split train and test with respect to a certain date. So, you don’t want your test data to come before your training data.
In our experiment, we will define a date, say January 1, 2010, as our split date. The training data is the data between January 2, 1980 and December 31, 2009, which are about 11,000 training data points.
The test dataset is between January 1, 2010 and August 10, 2017, which are about 2,700 points.
The next thing to do is normalize the dataset. You only need to fit and transform your training data and just transform your test data. The reason you do that is you don’t want to assume that you know the scale of your test data.
Normalizing or transforming the data means that the new scale variables will be between zero and one.
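A minimal sketch of that split-then-scale step with pandas and scikit-learn (the file name, column names, and split date below are placeholders):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("usd_inr.csv", parse_dates=["date"])               # hypothetical file and columns
train = df[df["date"] < "2010-01-01"]["rate"].values.reshape(-1, 1)
test = df[df["date"] >= "2010-01-01"]["rate"].values.reshape(-1, 1)

scaler = MinMaxScaler(feature_range=(0, 1))
train_scaled = scaler.fit_transform(train)   # fit the scale on training data only
test_scaled = scaler.transform(test)         # reuse that same scale for the test data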
A fully Connected Model is a simple neural network model which is built as a simple regression model that will take one input and will spit out one output. This basically takes the price from the previous day and forecasts the price of the next day.
As a loss function, we use mean squared error and stochastic gradient descent as an optimizer, which after enough numbers of epochs will try to look for a good local optimum. Below is the summary of the fully connected layer.
After training this model for 200 epochs or early_callbacks (whichever came first), the model tries to learn the pattern and the behavior of the data. Since we split the data into training and testing sets we can now predict the value of testing data and compare them with the ground truth.
As you can see, the model is not good. It essentially is repeating the previous values and there is a slight shift. The fully connected model is not able to predict the future from the single previous value. Let us now try using a recurrent neural network and see how well it does.
The recurrent model we have used is a one layer sequential model. We used 6 LSTM nodes in the layer to which we gave input of shape (1,1), which is one input given to the network with one value.
The last layer is a dense layer where the loss is mean squared error with stochastic gradient descent as an optimizer. We train this model for 200 epochs with early_stopping callback. The summary of the model is shown above.
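A sketch of that model in Keras, following the description above (the exact callback settings and the commented training call are illustrative):

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(LSTM(6, input_shape=(1, 1)))    # 6 LSTM nodes; one time step with one feature
model.add(Dense(1))                       # single-value forecast
model.compile(loss="mean_squared_error", optimizer="sgd")

early_stopping = EarlyStopping(monitor="val_loss", patience=10)
# model.fit(X_train, y_train, epochs=200, validation_split=0.1, callbacks=[early_stopping])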
This model has learned to reproduce the yearly shape of the data and doesn’t have the lag it used to have with a simple feed forward neural network. It is still underestimating some observations by certain amounts and there is definitely room for improvement in this model.
There can be a lot of changes to be made in this model to make it better. One can always try to change the configuration by changing the optimizer. Another important change I see is by using the Sliding Time Window method, which comes from the field of stream data management system.
This approach comes from the idea that only the most recent data are important. One can show the model data from a year and try to make a prediction for the first day of the next year. Sliding time window methods are very useful in terms of fetching important patterns in the dataset that are highly dependent on the past bulk of observations.
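For reference, a sliding-window reshaping of a univariate series might look like this sketch, where the window length is a free parameter:

import numpy as np

def make_windows(series, window):
    # Each row of X holds `window` past values; y holds the value that follows them.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

series = np.arange(10, dtype=float)    # toy series
X, y = make_windows(series, window=3)
print(X.shape, y.shape)                # (7, 3) and (7,)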
Try to make changes to this model as you like and see how the model reacts to those changes.
I made the dataset available on my github account under deep learning in python repository. Feel free to download the dataset and play with it.
I personally follow some of my favorite data scientists like Kirill Eremenko, Jose Portilla, Dan Van Boxel (better known as Dan Does Data), and many more. Most of them are available on different podcast stations where they talk about different current subjects like RNN, Convolutional Neural Networks, LSTM, and even the most recent technology, Neural Turing Machine.
Try to keep up with the news of different artificial intelligence conferences. By the way, if you are interested, then Kirill Eremenko is coming to San Diego this November with his amazing team to give talks on Machine Learning, Neural Networks, and Data Science.
LSTM models are powerful enough to learn the most important past behaviors and understand whether or not those past behaviors are important features in making future predictions. There are several applications where LSTMs are highly used. Applications like speech recognition, music composition, handwriting recognition, and even in my current research of human mobility and travel predictions.
According to me, LSTM is like a model which has its own memory and which can behave like an intelligent human in making decisions.
Thank you again and happy machine learning!
I love Data Science. Let’s build some intelligent bots together! ;)
Data stories on machine learning and analytics. From Statsbot’s makers.
|
Eugenio Culurciello | 2.2K | 15 | https://towardsdatascience.com/neural-network-architectures-156e5bad51ba?source=tag_archive---------8---------------- | Neural Network Architectures – Towards Data Science | Deep neural networks and Deep Learning are powerful and popular algorithms. And a lot of their success lays in the careful design of the neural network architecture.
I wanted to revisit the history of neural network design in the last few years and in the context of Deep Learning.
For a more in-depth analysis and comparison of all the networks reported here, please see our recent article. One representative figure from this article is here:
Reporting top-1 one-crop accuracy versus amount of operations required for a single forward pass in multiple popular neural network architectures.
It is the year 1994, and this is one of the very first convolutional neural networks, and what propelled the field of Deep Learning. This pioneering work by Yann LeCun was named LeNet5 after many previous successful iterations since the year 1988!
The LeNet5 architecture was fundamental, in particular the insight that image features are distributed across the entire image, and convolutions with learnable parameters are an effective way to extract similar features at multiple locations with few parameters. At the time there was no GPU to help training, and even CPUs were slow. Therefore being able to save parameters and computation was a key advantage. This is in contrast to using each pixel as a separate input of a large multi-layer neural network. LeNet5 explained that those should not be used in the first layer, because images are highly spatially correlated, and using individual pixels of the image as separate input features would not take advantage of these correlations.
LeNet5 features can be summarized as:
In overall this network was the origin of much of the recent architectures, and a true inspiration for many people in the field.
In the years from 1998 to 2010 neural networks were in incubation. Most people did not notice their increasing power, while many other researchers slowly progressed. More and more data was available because of the rise of cell-phone cameras and cheap digital cameras. And computing power was on the rise: CPUs were becoming faster, and GPUs became a general-purpose computing tool. Both of these trends made neural networks progress, albeit at a slow rate. Both data and computing power made the tasks that neural networks tackled more and more interesting. And then it became clear...
In 2010 Dan Claudiu Ciresan and Jurgen Schmidhuber published one of the very first implementations of GPU neural nets. This implementation had both the forward and backward passes running on an NVIDIA GTX 280 graphics processor, for a neural network of up to 9 layers.
In 2012, Alex Krizhevsky released AlexNet which was a deeper and much wider version of the LeNet and won by a large margin the difficult ImageNet competition.
AlexNet scaled the insights of LeNet into a much larger neural network that could be used to learn much more complex objects and object hierarchies. The contribution of this work were:
At the time GPU offered a much larger number of cores than CPUs, and allowed 10x faster training time, which in turn allowed to use larger datasets and also bigger images.
The success of AlexNet started a small revolution. Convolutional neural networks were now the workhorse of Deep Learning, which became the new name for “large neural networks that can now solve useful tasks”.
In December 2013 the NYU lab from Yann LeCun came up with Overfeat, which is a derivative of AlexNet. The article also proposed learning bounding boxes, which later gave rise to many other papers on the same topic. I believe it is better to learn to segment objects rather than learn artificial bounding boxes.
The VGG networks from Oxford were the first to use much smaller 3×3 filters in each convolutional layers and also combined them as a sequence of convolutions.
This seems to be contrary to the principles of LeNet, where large convolutions were used to capture similar features in an image. Instead of the 9×9 or 11×11 filters of AlexNet, filters started to become smaller, dangerously close to the infamous 1×1 convolutions that LeNet wanted to avoid, at least in the first layers of the network. But the great advantage of VGG was the insight that multiple 3×3 convolutions in sequence can emulate the effect of larger receptive fields, for example 5×5 and 7×7. These ideas will also be used in more recent network architectures such as Inception and ResNet.
The VGG networks uses multiple 3x3 convolutional layers to represent complex features. Notice blocks 3, 4, 5 of VGG-E: 256×256 and 512×512 3×3 filters are used multiple times in sequence to extract more complex features and the combination of such features. This is effectively like having large 512×512 classifiers with 3 layers, which are convolutional! This obviously amounts to a massive number of parameters, and also learning power. But training of these network was difficult, and had to be split into smaller networks with layers added one by one. All this because of the lack of strong ways to regularize the model, or to somehow restrict the massive search space promoted by the large amount of parameters.
VGG used large feature sizes in many layers and thus inference was quite costly at run-time. Reducing the number of features, as done in Inception bottlenecks, will save some of the computational cost.
Network-in-network (NiN) had the great and simple insight of using 1x1 convolutions to provide more combinational power to the features of a convolutional layer.
The NiN architecture used spatial MLP layers after each convolution, in order to better combine features before another layer. Again, one could think that 1x1 convolutions are against the original principles of LeNet, but really they instead help to combine convolutional features in a better way, which is not possible by simply stacking more convolutional layers. This is different from using raw pixels as input to the next layer. Here 1×1 convolutions are used to spatially combine features across feature maps after convolution, so they effectively use very few parameters, shared across all pixels of these features!
The power of MLP can greatly increase the effectiveness of individual convolutional features by combining them into more complex groups. This idea will be later used in most recent architectures as ResNet and Inception and derivatives.
NiN also used an average pooling layer as part of the last classifier, another practice that will become common. This was done to average the response of the network to multiple areas of the input image before classification.
Christian Szegedy from Google began a quest aimed at reducing the computational burden of deep neural networks, and devised GoogLeNet, the first Inception architecture.
By now, Fall 2014, deep learning models were becoming extremely useful in categorizing the content of images and video frames. Most skeptics had conceded that Deep Learning and neural nets were back to stay this time. Given the usefulness of these techniques, internet giants like Google were very interested in efficient and large deployments of architectures on their server farms.
Christian thought a lot about ways to reduce the computational burden of deep neural nets while obtaining state-of-art performance (on ImageNet, for example). Or be able to keep the computational cost the same, while offering improved performance.
He and his team came up with the Inception module:
which at a first glance is basically the parallel combination of 1×1, 3×3, and 5×5 convolutional filters. But the great insight of the inception module was the use of 1×1 convolutional blocks (NiN) to reduce the number of features before the expensive parallel blocks. This is commonly referred to as a “bottleneck”. This deserves its own section to explain: see the “bottleneck layer” section below.
GoogLeNet used a stem without inception modules as initial layers, and an average pooling plus softmax classifier similar to NiN. This classifier also requires an extremely low number of operations, compared to the ones of AlexNet and VGG. This also contributed to a very efficient network design.
Inspired by NiN, the bottleneck layer of Inception was reducing the number of features, and thus operations, at each layer, so the inference time could be kept low. Before passing data to the expensive convolution modules, the number of features was reduced by, say, 4 times. This led to large savings in computational cost, and to the success of this architecture.
Let’s examine this in detail. Let’s say you have 256 features coming in, and 256 coming out, and let’s say the Inception layer only performs 3x3 convolutions. That is 256x256 x 3x3 convolutions that have to be performed (about 589,000 multiply-accumulate, or MAC, operations). That may be more than the computational budget we have, say, to run this layer in 0.5 milliseconds on a Google server. Instead of doing this, we decide to reduce the number of features that will have to be convolved, say to 64 or 256/4. In this case, we first perform 256 -> 64 1×1 convolutions, then 64 convolutions on all Inception branches, and then we use again a 1x1 convolution from 64 -> 256 features back again. The operations are now:
• 256 -> 64 1×1 convolutions: about 16,000 operations
• 64 -> 64 3×3 convolutions: about 37,000 operations
• 64 -> 256 1×1 convolutions: about 16,000 operations
For a total of about 70,000 versus the almost 600,000 we had before. Almost 10x less operations!
And although we are doing less operations, we are not losing generality in this layer. In fact the bottleneck layers have been proven to perform at state-of-art on the ImageNet dataset, for example, and will be also used in later architectures such as ResNet.
The reason for the success is that the input features are correlated, and thus redundancy can be removed by combining them appropriately with the 1x1 convolutions. Then, after convolution with a smaller number of features, they can be expanded again into meaningful combination for the next layer.
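A minimal PyTorch sketch of the 256 -> 64 -> 64 -> 256 bottleneck described above (an illustration of the idea, not the exact GoogLeNet branch):

import torch
import torch.nn as nn

bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),             # 1x1: reduce 256 features to 64
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),   # 3x3: the "expensive" convolution, now cheap
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 256, kernel_size=1),             # 1x1: expand back to 256 features
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 256, 32, 32)
print(bottleneck(x).shape)    # torch.Size([1, 256, 32, 32])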
Christian and his team are very efficient researchers. In February 2015 Batch-normalized Inception was introduced as Inception V2. Batch-normalization computes the mean and standard-deviation of all feature maps at the output of a layer, and normalizes their responses with these values. This corresponds to “whitening” the data, and thus making all the neural maps have responses in the same range, and with zero mean. This helps training as the next layer does not have to learn offsets in the input data, and can focus on how to best combine features.
In December 2015 they released a new version of the Inception modules and the corresponding architecture. This article better explains the original GoogLeNet architecture, giving a lot more detail on the design choices. A list of the original ideas:
Inception still uses a pooling layer plus softmax as final classifier.
The revolution then came in December 2015, at about the same time as Inception v3. ResNet has a simple idea: feed the output of two successive convolutional layers AND also bypass the input to the next layers!
This is similar to older ideas like this one. But here they bypass TWO layers and the idea is applied at large scale. Bypassing after 2 layers is a key intuition, as bypassing a single layer did not give much improvement. Bypassing 2 layers can be thought of as a small classifier, or a Network-In-Network!
This is also the very first time that networks of more than a hundred, even 1000, layers were trained.
ResNet with a large number of layers started to use a bottleneck layer similar to the Inception bottleneck:
This layer reduces the number of features at each layer by first using a 1x1 convolution with a smaller output (usually 1/4 of the input), and then a 3x3 layer, and then again a 1x1 convolution to a larger number of features. Like in the case of Inception modules, this allows to keep the computation low, while providing rich combination of features. See “bottleneck layer” section after “GoogLeNet and Inception”.
ResNet uses a fairly simple initial layer at the input (the stem): a 7x7 conv layer followed by a pool of 2. Contrast this to more complex and less intuitive stems as in Inception V3 and V4.
ResNet also uses a pooling layer plus softmax as final classifier.
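A sketch of a ResNet-style bottleneck block with the identity bypass in PyTorch (illustrative only; the real blocks also use batch normalization and a projection shortcut when the shapes change):

import torch
import torch.nn as nn

class ResidualBottleneck(nn.Module):
    def __init__(self, channels, reduced):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, reduced, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, channels, kernel_size=1),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)    # add the input back in (the bypass)

block = ResidualBottleneck(channels=256, reduced=64)
x = torch.randn(1, 256, 32, 32)
print(block(x).shape)    # torch.Size([1, 256, 32, 32])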
Additional insights about the ResNet architecture are appearing every day:
And Christian and team are at it again with a new version of Inception.
The Inception module after the stem is rather similar to Inception V3:
They also combined the Inception module with the ResNet module:
This time though, the solution is, in my opinion, less elegant, more complex, and also full of less transparent heuristics. It is hard to understand the choices, and it is also hard for the authors to justify them.
In this regard the prize for a clean and simple network that can be easily understood and modified now goes to ResNet.
SqueezeNet has been recently released. It is a re-hash of many concepts from ResNet and Inception, and shows that, after all, a better architecture design can deliver small network sizes and parameter counts without needing complex compression algorithms.
Our team set out to combine all the features of the recent architectures into a very efficient and lightweight network that uses very few parameters and little computation to achieve state-of-the-art results. This network architecture is dubbed ENet, and was designed by Adam Paszke. We have used it to perform pixel-wise labeling and scene parsing. Here are some videos of ENet in action. These videos are not part of the training dataset.
The technical report on ENet is available here. ENet is an encoder-plus-decoder network. The encoder is a regular CNN designed for categorization, while the decoder is an upsampling network designed to propagate the categories back into the original image size for segmentation. This work used only neural networks, and no other algorithm, to perform image segmentation.
As you can see in this figure ENet has the highest accuracy per parameter used of any neural network out there!
ENet was designed to use the minimum number of resources possible from the start. As such it achieves such a small footprint that the encoder and decoder together occupy only 0.7 MB at fp16 precision. Even at this small size, ENet is similar to or better than other pure neural network solutions in segmentation accuracy.
A systematic evaluation of CNN modules has been presented. The authors found that it is advantageous to:
• use ELU non-linearity without batchnorm or ReLU with it.
• apply a learned colorspace transformation of RGB.
• use the linear learning rate decay policy.
• use a sum of the average and max pooling layers (see the sketch after this list).
• use mini-batch size around 128 or 256. If this is too big for your GPU, decrease the learning rate proportionally to the batch size.
• use fully-connected layers as convolutional and average the predictions for the final decision.
• when investing in increasing training set size, check whether a plateau has not been reached.
• cleanliness of the data is more important than its size.
• if you cannot increase the input image size, reduce the stride in the subsequent layers; it has roughly the same effect.
• if your network has a complex and highly optimized architecture, like e.g. GoogLeNet, be careful with modifications.
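For the pooling suggestion above, here is a minimal sketch of a summed average-plus-max pooling stage in Keras (my own illustration of the idea, not code from the evaluation paper):

from tensorflow.keras import layers

def sum_pool(x, pool_size=2):
    # Sum of average pooling and max pooling over the same window.
    avg = layers.AveragePooling2D(pool_size)(x)
    mx = layers.MaxPooling2D(pool_size)(x)
    return layers.Add()([avg, mx])

inputs = layers.Input(shape=(32, 32, 16))
pooled = sum_pool(inputs)  # output shape: (16, 16, 16)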
Xception improves on the inception module and architecture with a simple and more elegant architecture that is as effective as ResNet and Inception V4.
The Xception module is presented here:
This network can be anyone’s favorite given the simplicity and elegance of the architecture, presented here:
The architecture has 36 convolutional stages, making it close in similarity to a ResNet-34. But the model and code are as simple as ResNet and much more comprehensible than Inception V4.
A Torch7 implementation of this network is available here. An implementation in Keras/TF is available here.
It is interesting to note that the recent Xception architecture was also inspired by our work on separable convolutional filters.
A new MobileNets architecture has also been available since April 2017. This architecture uses separable convolutions to reduce the number of parameters. The separable convolution is the same as in Xception above. Now the claim of the paper is that there is a great reduction in parameters, about 1/2 in the case of FaceNet, as reported in the paper. Here is the complete model architecture:
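A rough parameter count shows where the savings in separable convolutions come from (spatial sizes and biases are ignored, and the actual MobileNets figures also depend on the width and resolution multipliers):

def standard_conv_params(c_in, c_out, k=3):
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k=3):
    depthwise = c_in * k * k      # one k x k filter per input channel
    pointwise = c_in * c_out      # 1x1 convolution mixing the channels
    return depthwise + pointwise

print(standard_conv_params(256, 256))   # 589824
print(separable_conv_params(256, 256))  # 67840, roughly 8-9x fewer parameters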
Unfortunately, we have tested this network in an actual application and found it to be abysmally slow on a batch of 1 on a Titan Xp GPU. Look at this comparison of inference time per image:
Clearly this is not a contender for fast inference! It may reduce the parameters and the size of the network on disk, but it is not usable.
FractalNet uses a recursive architecture, which was not tested on ImageNet, and is a derivative of the more general ResNet.
We believe that crafting neural network architectures is of paramount importance for the progress of the Deep Learning field. Our group highly recommends reading carefully and understanding all the papers in this post.
But one could now wonder why we have to spend so much time crafting architectures, and why instead we do not use data to tell us what to use and how to combine modules. That would be nice, but for now it is work in progress. Some initial interesting results are here.
Note also that here we mostly talked about architectures for computer vision. Neural network architectures have developed similarly in other areas, and it is interesting to study the evolution of architectures for all other tasks as well.
If you are interested in a comparison of neural network architectures and computational performance, see our recent paper.
This post was inspired by discussions with Abhishek Chaurasia, Adam Paszke, Sangpil Kim, Alfredo Canziani and others in our e-Lab at Purdue University.
I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more...
If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference!
I dream and build new technology
Sharing concepts, ideas, and codes.
|
Gary Marcus | 1.3K | 27 | https://medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1?source=tag_archive---------0---------------- | In defense of skepticism about deep learning – Gary Marcus – Medium | In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. I suggested instead the deep learning be viewed “not as a universal solvent, but simply as one tool among many.”
In place of pure deep learning, I called for hybrid models, that would incorporate not just supervised forms of deep learning, but also other techniques as well, such as symbol-manipulation, and unsupervised learning (itself possibly reconceptualized). I also urged the community to consider incorporating more innate structure into AI systems.
Within a few days, thousands of people had weighed in over Twitter, some enthusiastic (“e.g, the best discussion of #DeepLearning and #AI I’ve read in many years”), some not (“Thoughtful... But mostly wrong nevertheless”).
Because I think clarity around these issues is so important, I’ve compiled a list of fourteen commonly-asked queries. Where does unsupervised learning fit in? Why didn’t I say more nice things about deep learning? What gives me the right to talk about this stuff in the first place? What’s up with asking a neural network to generalize from even numbers to odd numbers? (Hint: that’s the most important one). And lots more. I haven’t addressed literally every question I have seen, but I have tried to be representative.
1. What is general intelligence?
Thomas Dietterich, an eminent professor of machine learning, and my most thorough and explicit critic thus far, gave a nice answer that I am very comfortable with:
2. Marcus wasn’t very nice to deep learning. He should have said more nice things about all of its vast accomplishments. And he minimizes others.
Dietterich, mentioned above, made both of these points, writing:
On the first part of that, true, I could have said more positive things. But it’s not like I didn’t say any. Or even like I forgot to mention Dietterich’s best example; I mentioned it on the first page:
More generally, later in the article I cited a couple of great texts and excellent blogs that have pointers to numerous examples. A lot of them though, would not really count as AGI, which was the main focus of my paper. (Google Translate, for example, is extremely impressive, but it’s not general; it can’t, for example, answer questions about what it has translated, the way a human translator could.)
The second part is more substantive. Is 1,000 categories really very finite? Well, yes, compared to the flexibility of cognition. Cognitive scientists generally place the number of atomic concepts known by an individual as being on the order of 50,000, and we can easily compose those into a vastly greater number of complex thoughts. Pets and fish are probably counted in those 50,000; pet fish, which is something different, probably isn’t counted. And I can easily entertain the concept of “a pet fish that is suffering from Ick”, or note that “it is always disappointing to buy a pet fish only to discover that it was infected with Ick” (an experience that I had as a child and evidently still resent). How many ideas like that I can express? It’s a lot more than 1,000.
I am not precisely sure how many visual categories a person can recognize, but suspect the math is roughly similar. Try google images on “pet fish”, and you do ok; try it on “pet fish wearing goggles” and you mostly find dogs wearing goggles, with a false alarm rate of over 80%.
Machines win over nonexpert humans on distinguishing similar dog breeds, but people win, by a wide margin, on interpreting complex scenes, like what would happen to a skydiver who was wearing a backpack rather than a parachute.
In focusing on 1,000 category chunks the machine learning field is, in my view, doing itself a disservice, trading a short-term feeling of success for a denial of harder, more open-ended problems (like scene and sentence comprehension) that must eventually be addressed. Compared to the essentially infinite range of sentences and scenes we can see and comprehend, 1000 of anything really is small. [See also Note 2 at bottom]
3. Marcus says deep learning is useless, but it’s great for many things
Of course it is useful; I never said otherwise, only that (a) in its current supervised form, deep learning might be approaching its limits and (b) that those limits would stop short from full artificial general intelligence — unless, maybe, we started incorporating a bunch of other stuff like symbol-manipulation and innateness.
The core of my conclusion was this:
4. “One thing that I don’t understand. — @GaryMarcus says that DL is not good for hierarchical structures. But in @ylecun nature review paper [says that] that DL is particularly suited for exploiting such hierarchies.”
This is an astute question, from Ram Shankar, and I should have been a LOT clearer about the answer: there are many different types of hierarchy one could think about. Deep learning is really good, probably the best ever, at the sort of feature-wise hierarchy LeCun talked about, which I typically refer to as hierarchical feature detection; you build lines out of pixels, letters out of lines, words out of letters and so forth. Kurzweil and Hawkins have emphasized this sort of thing, too, and it really goes back to Hubel and Wiesel (1959) in neuroscience experiments and to Fukushima (Fukushima, Miyake, & Ito, 1983) in AI. Fukushima, in his Neocognitron model, hand-wired his hierarchy of successively more abstract features; LeCun and many others after showed that (at least in some cases) you don’t have to hand engineer them.
But such a network doesn’t keep track of the subcomponents it encounters along the way; the top-level system need not explicitly encode the structure of the overall output in terms of which parts were seen along the way; this is part of why a deep learning system can be fooled into thinking a pattern of black and yellow stripes is a school bus (Nguyen, Yosinski, & Clune, 2014). That stripe pattern is strongly correlated with activation of the school bus output units, which is in turn correlated with a bunch of lower-level features, but in a typical image-recognition deep network, there is no fully-realized representation of a school bus as being made up of wheels, a chassis, windows, etc. Virtually the whole spoofing literature can be thought of in these terms. [Note 3]
The structural sense of hierarchy which I was discussing was different, and focused around systems that can make explicit reference to the parts of larger wholes. The classic illustration would be Chomsky’s sense of hierarchy, in which a sentence is composed of increasingly complex grammatical units (e.g., using a novel phrase like the man who mistook his hamburger for a hot dog with a larger sentence like The actress insisted that she would not be outdone by the man who mistook his hamburger for a hot dog). I don’t think deep learning does well here (e.g., in discerning the relation between the actress, the man, and the misidentified hot dog), though attempts have certainly been made.
Even in vision, the problem is not entirely licked; Hinton’s recent capsule work (Sabour, Frosst, & Hinton, 2017), for example, is an attempt to build in more robust part-whole directions for image recognition, by using more structured networks. I see this as a good trend, and one potential way to begin to address the spoofing problem, but also as a reflection of trouble with the standard deep learning approach.
5. “It’s weird to discuss deep learning in [the] context of general AI. General AI is not the goal of deep learning!”
Best twitter response to this came from University of Quebec professor Daniel Lemire: “Oh! Come on! Hinton, Bengio... are openly going for a model of human intelligence.”
Second prize goes to a math PhD at Google, Jeremy Kun, who countered the dubious claim that “General AI is not the goal of deep learning” with “If that’s true, then deep learning experts sure let everyone believe it is without correcting them.”
Andrew Ng’s recent Harvard Business Review article, which I cited, implies that deep learning can do anything a person can do in a second. Thomas Dietterich’s tweet that said in part “it is hard to argue that there are limits to DL”. Jeremy Howard worried that the idea that deep learning is overhyped might itself be overhyped, and then suggested that every known limit had been countered.
DeepMind’s recent AlphaGo paper [See Note 4] is positioned somewhat similarly, with Silver et al (Silver et al., 2017) enthusiastically reporting that:
In that paper’s concluding discussion, not one of the 10 challenges to deep learning that I reviewed was mentioned. (As I will discuss in a paper coming out soon, it’s not actually a pure deep learning system, but that’s a story for another day.)
The main reason people keep benchmarking their AI systems against humans is precisely because AGI is the goal.
6. What Marcus said is a problem with supervised learning, not deep learning.
Yann LeCun presented a version of this, in a comment on my Facebook page:
The part about my allegedly not recognizing LeCun’s recent work is, well, odd. It’s true that I couldn’t find a good summary article to cite (when I asked LeCun, he told me by email that there wasn’t one yet) but I did mention his interest explicitly:
I also noted that:
My conclusion was positive, too. Although I expressed reservations about current approaches to building unsupervised systems, I ended optimistically:
What LeCun’s remark does get right is that many of the problems I addressed are a general problem with supervised learning, not something unique to deep learning; I could have been more clear about this. Many other supervised learning techniques face similar challenges, such as problems in generalization and dependence on massive data sets; relatively little of what I said is unique to deep learning. In my focus on assessing deep learning at the five year resurgence mark, I neglected to say that.
But it doesn’t really help deep learning that other supervised learning techniques are in the same boat. If someone could come up with a truly impressive way of using deep learning in an unsupervised way, a reassessment might be required. But I don’t see that unsupervised learning, at least as it currently pursued, particularly remedies the challenges I raised, e.g., with respect to reasoning, hierarchical representations, transfer, robustness, and interpretability. It’s simply a promissory note. [Note 5]
As Portland State and Santa Fe Institute Professor Melanie Mitchell’s put it in a thus far unanswered tweet:
I would, too.
In the meantime, I see no principled reason to believe that unsupervised learning can solve the problems I raise, unless we add in more abstract, symbolic representations, first.
7. Deep learning is not just convolutional networks [of the sort Marcus critiqued], it’s “essentially a new style of programming — ”differentiable programming” — and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc” — Tom Dietterich
This seemed (in the context of Dietterich’s longer series of tweets) to have been proposed as a criticism, but I am puzzled by that, as I am a fan of differentiable programming and said so. Perhaps the point was that deep learning can be taken in a broader way.
In any event, I would not equate deep learning and differentiable programming (e.g., approaches that I cited like neural Turing machines and neural programming). Deep learning is a component of many differentiable systems. But such systems also build in exactly the sort of elements drawn from symbol-manipulation that I am and have been urging the field to integrate (Marcus, 2001; Marcus, Marblestone, & Dean, 2014a; Marcus, Marblestone, & Dean, 2014b), including memory units and operations over variables, and other systems like routing units stressed in the more recent two essays. If integrating all this stuff into deep learning is what gets us to AGI, my conclusion, quoted below, will have turned out to be dead on:
8. Now vs the future. Maybe deep learning doesn’t work now, but it’s offspring will get us to AGI.
Possibly. I do think that deep learning might play an important role in getting us to AGI, if some key things (many not yet discovered) are added in first.
But what we add matters, and whether it is reasonable to call some future system an instance of deep learning per se, or more sensible to call the ultimate system “a such-and-such that uses deep learning”, depends on where deep learning fits into the ultimate solution. Maybe, for example, in truly adequate natural language understanding systems, symbol-manipulation will play an equally large role as deep learning, or an even larger one.
Part of the issue here is of course terminological. A very good friend recently asked me, why can’t we just call anything that includes deep learning, deep learning, even if it includes symbol-manipulation? Some enhancement to deep learning ought to work. To which I respond: why not call anything that includes symbol-manipulation, symbol-manipulation, even if it includes deep learning?
Gradient-based optimization should get its due, but so should symbol-manipulation, which as yet is the only known tool for systematically representing and achieving high-level abstraction, bedrock to virtually all of the world’s complex computer systems, from spreadsheets to programming environments to operating systems.
Eventually, I conjecture, credit will also be due to the inevitable marriage between the two, hybrid systems that bring together the two great ideas of 20th century AI, symbol-processing and neural networks, both initially developed in the 1950s. Other new tools yet to be invented may be critical as well.
To a true acolyte of deep learning, anything is deep learning, no matter what it’s incorporating, and no matter how different it might be from current techniques. (Viva Imperialism!) If you replaced every transistor in a classic symbolic microprocessor with a neuron, but kept the chip’s logic entirely unchanged, a true deep learning acolyte would still declare victory. But we won’t understand the principles driving (eventual) success if we lump everything together. [Note 6]
9. No machine can extrapolate. It’s not fair to expect a neural network to generalize from even numbers to odd numbers.
Here’s a function, expressed over binary digits.
f(110) = 011;
f(100) = 001;
f(010) = 010.
What’s f(111)?
If you are an ordinary human, you are probably going to guess 111. If you are neural network of the sort I discussed, you probably won’t.
If you have been told many times that hidden layers in neural networks “abstract functions”, you should be a little bit surprised by this.
If you are a human, you might think of the function as something like “reversal”, easily expressed in a line of computer code. If you are a neural network of a certain sort, it’s very hard to learn the abstraction of reversal in a way that extends from evens in that context to odds. But is that impossible? Certainly not if you have a prior notion of an integer. Try another, this time in decimal: f(4) = 8; f(6) = 12. What’s f(5)? None of my human readers would care that the question happens to require you to extrapolate from even numbers to odds; a lot of neural networks would be flummoxed.
Sure, the function is underdetermined by the sparse number of examples, like all functions, but it is interesting and important that most people (amid the infinite range of a priori possible inductions) would alight on f(5)=10.
And just as interesting that most standard multilayer perceptrons, representing the numbers as binary digits, wouldn’t. That’s telling us something, but many people in the neural network community, François Chollet being one very salient exception, don’t want to listen.
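A small experiment in this spirit is easy to set up; this is my own sketch of the kind of test described above, not code from any of the cited papers, and the exact behaviour will depend on the encoding, architecture, and training details:

import numpy as np
from sklearn.neural_network import MLPRegressor

def to_bits(n, width=4):
    return [int(b) for b in format(n, '0{}b'.format(width))]

# Train the identity function on even numbers only, so the rightmost bit is
# always 0 in the training data.
train = np.array([to_bits(n) for n in range(0, 16, 2)])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(train, train)

# Test on odd numbers: the network has never seen a 1 in the rightmost position.
test = np.array([to_bits(n) for n in range(1, 16, 2)])
print(np.round(model.predict(test), 2))  # the rightmost column tends to stay near 0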
Importantly, recognizing that a rule applies to any integer is roughly the same kind of generalization that allows one to recognize that a novel noun that can be used in one context can be used in a huge variety of other contexts. From the first time I hear the word blicket used as an object, I can guess that it will fit into a wide range of frames, like I thought I saw a blicket, I had a close encounter with a blicket, and exceptionally large blickets frighten me, etc. And I can both generate and interpret such sentences, without specific further training. It doesn’t matter whether blicket is or is not similar in (for example) phonology to other words I have heard, nor whether I pile on the adjectives or use the word as a subject or an object. If most machine learning [ML] paradigms have a problem with this, we should have a problem with most ML paradigms.
Am I being “fair”? Well, yes, and no. It’s true that I am asking neural networks to do something that violates their assumptions.
A neural network advocate might, for example, say, “hey wait a minute, in your reversal example, there are three dimensions in your input space, representing the left binary digit, the middle binary digit, and rightmost binary digit. The rightmost binary digit has only been a zero in the training; there is no way a network can know what to do when you get to one in that position.” For example, Vincent Lostenlan, a postdoc at Cornell, said
Dietterich, made essentially the same point, more concisely:
But although both are right about why odds-and-evens are (in this context) hard for deep learning, they are both wrong about the larger issues for three reasons.
First, it can’t be that people can’t extrapolate. You just did, in two different examples, at the top of this section. Paraphrasing Chico Marx: who are you going to believe, me or your own eyes?
To someone immersed deeply — perhaps too deeply — in contemporary machine learning, my odds-and-evens problem seems unfair because a certain dimension (the one which contains the value of 1 in the rightmost digit) hasn’t been illustrated in the training regime. But when you, a human, look at my examples above, you will not be stymied by this particular gap in the training data. You won’t even notice it, because your attention is on higher-level regularities.
People routinely extrapolate in exactly the fashion that I have been describing, like recognizing string reversal from the three training examples I gave above. In a technical sense, that is extrapolation, and you just did it. In The Algebraic Mind I referred to this specific kind of extrapolation as generalizing universally quantified one-to-one mappings outside of a space of training examples. As a field we desperately need a solution to this challenge, if we are ever to catch up to human learning — even if it means shaking up our assumptions.
Now, it might reasonably be objected that it’s not a fair fight: humans manifestly depend on prior knowledge when they generalize such mappings. (In some sense, Dietterich proposed this objection later in his tweet stream.)
True enough. But in a way, that’s the point: neural networks of a certain sort don’t have a good way of incorporating the right sort of prior knowledge in the first place. It is precisely because those networks don’t have a way of incorporating prior knowledge like “many generalizations hold for all elements of unbounded classes” or “odd numbers leave a remainder of one when divided by two” that neural networks lacking operations over variables fail. The right sort of prior knowledge would allow neural networks to acquire and represent universally quantified one-to-one mappings. Standard neural networks can’t represent such mappings, except in certain limited ways. (Convolution is a way of building in one particular such mapping, prior to learning.)
Second, saying that no current system (deep learning or otherwise) can extrapolate in the way that I have described is no excuse; once again other architectures may be in the choppy water, but that doesn’t mean we shouldn’t be trying to swim to shore. If we want to get to AGI, we have to solve the problem.
(Put differently: yes, one could certainly hack together solutions to get deep learning to solve my specific number series problems, by, for example, playing games with the input encoding schemes; the real question, if we want to get to AGI, is how to have a system learn the sort of generalizations I am describing in a general way.)
Third, the claim that no current system can extrapolate turns out to be, well, false; there are already ML systems that can extrapolate at least some functions of exactly the sort I described, and you probably own one: Microsoft Excel, its Flash Fill function in particular (Gulwani, 2011). Powered by a very different approach to machine learning, it can do certain kinds of extrapolation, albeit in a narrow context, by the bushel, e.g., try typing the (decimal) digits 1, 11, 21 in a series of rows and see if the system can extrapolate via Flash Fill to the eleventh item in the sequence (101).
Spoiler alert: it can, in exactly the same way as you probably would, even though there were no positive examples in the training dimension of the hundreds digit. The system learns from examples the function you want and extrapolates it. Piece of cake. Can any deep learning system do that with three training examples, even with a range of experience on other small counting functions, like 1, 3, 5, ... and 2, 4, 6, ...?
Well maybe, but the only ones likely to do so are hybrids that build in operations over variables, which are quite different from the sort of typical convolutional neural networks that most people associate with deep learning.
Putting all this very differently, one crude way to think about where we are with most ML systems that we have today [Note 7] is that they just aren’t designed to think “outside the box”; they are designed to be awesome interpolators inside the box. That’s fine for some purposes, but not others. Humans are better at thinking outside boxes than contemporary AI; I don’t think anyone can seriously doubt that.
But that kind of extrapolation, which Microsoft can do in a narrow context but which no machine can yet do with human-like breadth, is precisely what machine learning engineers really ought to be working on, if they want to get to AGI.
10. Everybody in the field already knew this. There is nothing new here.
Well, certainly not everybody; as noted, there were many critics who think we still don’t know the limits of deep learning, and others who believe that there might be some, but none yet discovered.
That said, I never said that any of my points was entirely new; for virtually all, I cited other scholars, who had independently reached similar conclusions.
11. Marcus failed to cite X.
Definitely true; the literature review was incomplete. One favorite among the papers I failed to cite is Shanahan’s Deep Symbolic Reinforcement Learning (Garnelo, Arulkumaran, & Shanahan, 2016); I also can’t believe I forgot Richardson and Domingos’ (2006) Markov Logic Networks. I also wish I had cited Evans and Grefenstette (2017), a great paper from DeepMind. And Smolensky’s tensor calculus work (Smolensky et al., 2016). And work on inductive programming in various forms (Gulwani et al., 2015), and probabilistic programming too, by Noah Goodman (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2012). All seek to bring rules and networks closer together.
And older stuff by pioneers like Jordan Pollack (Smolensky et al., 2016). And Forbus and Gentner’s (Falkenhainer, Forbus, & Gentner, 1989) and Hofstadter and Mitchell’s (1994) work on analogy; and many others. I am sure there is a lot more I could and should have cited.
Overall, I tried to be representative rather than fully comprehensive, but I still could have done better. #chagrin.
12. Marcus has no standing in the field; he isn’t a practitioner; he is just a critic.
Hesitant to raise this one, but it came up in all kinds of different responses, even from the mouths of certain well-known professionals. As Ram Shankar noted, “As a community, we must circumscribe our criticism to science and merit based arguments.” What really matters is not my credentials (which I believe do in fact qualify me to write) but the validity of the arguments.
Either my arguments are correct, or they are not.
[Still, for those who are curious, I supply an optional mini-history of some of my relevant credentials in Note 8 at the end.]
13. Re: hierarchy, what about Socher’s tree-RNNs?
I have written to him, in hopes of having a better understanding of its current status. I’ve also privately pushed several other teams towards trying out tasks like Lake and Baroni (2017) presented.
Pengfei et al (2017) offers some interesting discussion.
14. You could have been more critical of deep learning.
Nobody quite said that, not in exactly those words, but a few came close, generally privately.
One colleague for example pointed out that there may be some serious errors of future forecasting around
The same colleague added
Another colleague, ML researcher and author Pedro Domingos, pointed out still other shortcomings of current deep learning methods that I didn’t mention:
Like other flexible supervised learning methods, deep learning systems can be unstable in the sense that slightly changing the training data may result in large changes in the resulting model.
As Domingos notes, there’s no guarantee this sort of rise and decline won’t repeat itself. Neural networks have risen and fallen several times before, all the way back to Rosenblatt’s first Perceptron in 1957. We shouldn’t mistake cyclical enthusiasm for a complete solution to intelligence, which still seems (to me, anyway) to be decades away.
If we want to reach AGI, we owe it to ourselves to be as keenly aware of challenges we face as we are of our successes.
2. There are other problems too in relying on these 1,000-category image sets. For example, in reading a draft of this paper, Melanie Mitchell pointed me to important recent work by Loghmani and colleagues (2017) on assessing how deep learning does in the real world. Quoting from the abstract, the paper “analyzes the transferability of deep representations from Web images to robotic data [in the wild]. Despite the promising results obtained with [representations developed from Web images], the experiments demonstrate that object classification with real-life robotic data is far from being solved.”
3. And that literature is growing fast. In late December there was a paper about fooling deep nets into mistaking a pair of skiers for a dog [https://arxiv.org/pdf/1712.09665.pdf] and another on a general-purpose tool for building real-world adversarial patches: https://arxiv.org/pdf/1712.09665.pdf. (See also https://arxiv.org/abs/1801.00634.) It’s frightening to think how vulnerable deep learning can be in real-world contexts.
And for that matter consider Filip Pieknewski’s blog on why photo-trained deep learning systems have trouble transferring what they have learned to line drawings, https://blog.piekniewski.info/2016/12/29/can-a-deep-net-see-a-cat/. Vision is not as solved as many people seem to think.
4. As I will explain in the forthcoming paper, AlphaGo is not actually a pure [deep] reinforcement learning system, although the quoted passage presented it as such. It’s really more of a hybrid, with important components that are driven by symbol-manipulating algorithms, along with a well engineered deep-learning component.
5. AlphaZero, by the way, isn’t unsupervised, it’s self-supervised, using self-play and simulation as a way of generating supervised data; I will have a lot more to say about that system in a forthcoming paper.
6. Consider, for example Google Search, and how one might understand it. Google has recently added in a deep learning algorithm, RankBrain, to the wide array of algorithms it uses for search. And Google Search certainly takes in data and knowledge and processes them hierarchically (which according to Maher Ibrahim is all you need to count as being deep learning). But, realistically, deep learning is just one cue among many; the knowledge graph component, for example, is based instead primarily on classical AI notions of traversing ontologies. By any reasonable measure Google Search is a hybrid, with deep learning as just one strand among many.
Calling Google Search as a whole “a deep learning system” would be grossly misleading, akin to relabeling carpentry “screwdrivery”, just because screwdrivers happen to be involved.
7. Important exceptions include inductive logic programming, inductive function programming (the brains behind Microsoft’s Flash Fill) and neural programming. All are making some progress here; some of these even include deep learning, but they also all include structured representations and operations over variables among their primitive operations; that’s all I am asking for.
8. My AI experiments began in adolescence, with, among other things, a Latin-English translator that I coded in the programming language Logo. In graduate school, studying with Steven Pinker, I explored the relation between language acquisition, symbolic rules, and neural networks. (I also owe a debt to my undergraduate mentor Neil Stillings.) The child language data I gathered (Marcus et al., 1992) for my dissertation have been cited hundreds of times, and were the most frequently-modeled data in the 90’s debate about neural networks and how children learned language.
In the late 1990’s I discovered some specific, replicable problems with multilayer perceptrons (Marcus, 1998b; Marcus, 1998a); based on those observations, I designed a widely-cited experiment, published in Science (Marcus, Vijayan, Bandi Rao, & Vishton, 1999), that showed that young infants could extract algebraic rules, contra Jeff Elman’s (1990) then popular neural network. All of this culminated in a 2001 MIT Press book (Marcus, 2001), which lobbied for a variety of representational primitives, some of which have begun to pop up in recent neural networks; in particular, the use of operations over variables in the new field of differentiable programming (Daniluk, Rocktäschel, Welbl, & Riedel, 2017; Graves et al., 2016) owes something to the position outlined in that book. There was a strong emphasis on having memory records as well, which can be seen in the memory networks being developed, e.g., at Facebook (Bordes, Usunier, Chopra, & Weston, 2015). The next decade saw me work on other problems including innateness (Marcus, 2004) (which I will discuss at length in the forthcoming piece about AlphaGo) and evolution (Marcus, 2004; Marcus, 2008); I eventually returned to AI and cognitive modeling, publishing a 2014 article on cortical computation in Science (Marcus, Marblestone, & Dean, 2014) that also anticipates some of what is now happening in differentiable programming.
More recently, I took a leave from academia to found and lead a machine learning company in 2014; by any reasonable measure that company was successful, acquired by Uber roughly two years after founding. As co-founder and CEO I put together a team of some of the very best machine learning talent in the world, including Zoubin Ghahramani, Jeff Clune, Noah Goodman, Ken Stanley and Jason Yosinski, and played a pivotal role in developing our core intellectual property and shaping our intellectual mission. (A patent is pending, co-written by Zoubin Ghahramani and myself.)
Although much of what we did there remains confidential, now owned by Uber, and not by me, I can say that a large part of our efforts were addressed towards integrating deep learning with our own techniques, which gave me a great deal of familiarity with joys and tribulations of Tensorflow and vanishing (and exploding) gradients. We aimed for state-of-the-art results (sometimes successfully, sometimes not) with sparse data, using hybridized deep learning systems on a daily basis.
Bordes, A., Usunier, N., Chopra, S., & Weston, J. (2015). Large-scale Simple Question Answering with Memory Networks. arXiv.
Daniluk, M., Rocktäschel, T., Welbl, J., & Riedel, S. (2017). Frustratingly Short Attention Spans in Neural Language Modeling. arXiv.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211.
Evans, R., & Grefenstette, E. (2017). Learning Explanatory Rules from Noisy Data. arXiv, cs.NE.
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1–63.
Fukushima, K., Miyake, S., & Ito, T. (1983). Neocognitron: A neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 5, 826–834.
Garnelo, M., Arulkumaran, K., & Shanahan, M. (2016). Towards Deep Symbolic Reinforcement Learning. arXiv, cs.AI.
Goodman, N., Mansinghka, V., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2012). Church: a language for generative models. arXiv preprint arXiv:1206.3255.
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A. et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471–476.
Gulwani, S. (2011). Automating string processing in spreadsheets using input-output examples. dl.acm.org, 46(1), 317–330.
Gulwani, S., Hernández-Orallo, J., Kitzelmann, E., Muggleton, S. H., Schmid, U., & Zorn, B. (2015). Inductive programming meets the real world. Communications of the ACM, 58(11), 90–99.
Hofstadter, D. R., & Mitchell, M. (1994). The copycat project: A model of mental fluidity and analogy-making. Advances in Connectionist and Neural Computation Theory, 2(31–112), 29–30.
Hosseini, H., Xiao, B., Jaiswal, M., & Poovendran, R. (2017). On the Limitation of Convolutional Neural Networks in Recognizing Negative Images. arXiv, cs.CV.
Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat’s striate cortex. The Journal of Physiology, 148(3), 574–591.
Lake, B. M., & Baroni, M. (2017). Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. arXiv.
Loghmani, M. R., Caputo, B., & Vincze, M. (2017). Recognizing Objects In-the-wild: Where Do We Stand? arXiv, cs.RO.
Marcus, G. F. (1998a). Rethinking eliminative connectionism. Cognitive Psychology, 37(3), 243–282.
Marcus, G. F. (1998b). Can connectionism save constructivism? Cognition, 66(2), 153–182.
Marcus, G. F. (2001). The Algebraic Mind: Integrating Connectionism and cognitive science. Cambridge, Mass.: MIT Press.
Marcus, G. F. (2004). The Birth of the Mind : how a tiny number of genes creates the complexities of human thought. Basic Books.
Marcus, G. F. (2008). Kluge : the haphazard construction of the human mind. Boston : Houghton Mifflin.
Marcus, G. (2018). Deep Learning: A Critical Appraisal. arXiv.
Marcus, G. F., Marblestone, A., & Dean, T. (2014a). The atoms of neural computation. Science, 346(6209), 551–552.
Marcus, G. F., Marblestone, A. H., & Dean, T. L. (2014b). Frequently Asked Questions for: The Atoms of Neural Computation. Biorxiv (arXiv), q-bio.NC.
Marcus, G. F., Pinker, S., Ullman, M., Hollander, M., Rosen, T. J., & Xu, F. (1992). Overregularization in language acquisition. Monographs of the Society for Research in Child Development, 57(4), 1–182.
Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Rule learning by seven-month-old infants. Science, 283(5398), 77–80.
Nguyen, A., Yosinski, J., & Clune, J. (2014). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv, cs.CV.
Pengfei, L., Xipeng, Q., & Xuanjing, H. (2017). Dynamic Compositional Neural Networks over Tree Structure. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. arXiv, cs.LG.
Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62(1), 107–136.
Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic Routing Between Capsules. arXiv, cs.CV.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A. et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.
Smolensky, P., Lee, M., He, X., Yih, W.-t., Gao, J., & Deng, L. (2016). Basic Reasoning with Tensor Product Representations. arXiv, cs.AI.
CEO & Founder, Geometric Intelligence (acquired by Uber). Professor of Psychology and Neural Science, NYU. Freelancer for The New Yorker & New York Times.
|
Sarthak Jain | 3.9K | 10 | https://medium.com/nanonets/how-to-easily-detect-objects-with-deep-learning-on-raspberrypi-225f29635c74?source=tag_archive---------1---------------- | How to easily Detect Objects with Deep Learning on Raspberry Pi | Disclaimer: I’m building nanonets.com to help build ML with less data and no hardware
The Raspberry Pi is a neat piece of hardware that has captured the hearts of a generation, with ~15M devices sold and hackers building even cooler projects on it. Given the popularity of Deep Learning and the Raspberry Pi Camera, we thought it would be nice if we could detect any object using Deep Learning on the Pi.
Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house.
20M years of evolution have made human vision fairly sophisticated. The human brain devotes roughly 30% of its neurons to processing vision (compared with 8 percent for touch and just 3 percent for hearing). Humans have two major advantages over machines. One is stereoscopic vision; the second is an almost infinite supply of training data (a 5-year-old child has sampled approximately 2.7B images at 30fps).
To mimic human-level performance, scientists broke down the visual perception task into four different categories.
Object detection has been good enough for a variety of applications (even though image segmentation is a much more precise result, it suffers from the complexity of creating training data. It typically takes a human annotator 12x more time to segment an image than draw bounding boxes; this is more anecdotal and lacks a source). Also, after detecting objects, it is separately possible to segment the object from the bounding box.
Object detection is of significant practical importance and has been used across a variety of industries. Some of the examples are mentioned below:
Object Detection can be used to answer a variety of questions. These are the broad categories:
There are a variety of models/architectures that are used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones: YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments).
Note: This is pseudo code, not intended to be a working example. It has a black box, which is the CNN part; that part is fairly standard and shown in the image below.
You can read the full paper here: https://pjreddie.com/media/files/papers/yolo_1.pdf
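The embedded snippet from the original post is not reproduced here; as a rough stand-in, a simplified sketch of the decoding step, turning the S x S grid of predictions into boxes, could look like this (grid size, box count, and threshold are illustrative, and a real pipeline would also apply non-max suppression):

import numpy as np

def decode_yolo(pred, num_classes=20, num_boxes=2, conf_threshold=0.3):
    # Turn an S x S x (B*5 + C) YOLO output grid into a list of detections.
    # Each box is (x, y, w, h, confidence) relative to its grid cell.
    S = pred.shape[0]
    detections = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_probs = cell[num_boxes * 5:]
            for b in range(num_boxes):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                scores = conf * class_probs        # class-specific confidence
                best = int(np.argmax(scores))
                if scores[best] > conf_threshold:
                    detections.append((row, col, x, y, w, h, best, float(scores[best])))
    return detections

grid = np.random.rand(7, 7, 2 * 5 + 20)  # stand-in for a real network output
print(len(decode_yolo(grid)))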
For this task, you probably need a few hundred images per object. Try to capture data as close as possible to the data you’re going to finally make predictions on.
Draw bounding boxes on the images. You can use a tool like labelImg. You will typically need a few people who will be working on annotating your images. This is a fairly intensive and time consuming task.
You can read more about this at medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab. You need a pretrained model so you can reduce the amount of data required to train. Without it, you might need a few 100k images to train the model.
You can find a bunch of pretrained models here
The process of training a model is unnecessarily difficult; to simplify it, we created a Docker image that makes it easy to train.
To start training the model you can run:
The Docker image has a run.sh script that can be called with the following parameters:
You can find more details at:
To train a model you need to select the right hyperparameters.
Finding the right parameters
The art of “Deep Learning” involves a little bit of trial and error to figure out which are the best parameters to get the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters.
Quantize Model (make it smaller to fit on a small device like the Raspberry Pi or Mobile)
Small devices like mobile phones and the Raspberry Pi have very little memory and computation power.
Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too).
Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs.
Why Quantize?
Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model.
The nodes and weights of a neural network are originally stored as 32-bit floating point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. The size of the files is reduced by 75%.
Code for Quantization:
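(The original post embedded its quantization commands here; as a stand-in, the min/max idea described above can be sketched in a few lines of NumPy. This illustrates the concept only, not TensorFlow’s production tooling.)

import numpy as np

def quantize(weights):
    # Map 32-bit floats to 8-bit integers using the tensor's min and max.
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 255.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(1000).astype(np.float32)
q, lo, scale = quantize(w)
print(w.nbytes, q.nbytes)                          # 4000 vs 1000 bytes: 75% smaller
print(np.abs(w - dequantize(q, lo, scale)).max())  # small reconstruction error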
You need the Raspberry Pi camera live and working. Then capture a new Image
For instructions on how to install it, check out this link.
Download Model
Once you’re done training the model, you can download it onto your Pi. To export the model, run:
Then download the model onto the Raspberry Pi.
Install TensorFlow on the Raspberry Pi
Depending on your device you might need to change the installation a little
Run model for predicting on the new Image
The Raspberry Pi has constraints on both memory and compute (a version of TensorFlow compatible with the Raspberry Pi GPU is still not available). Therefore, it is important to benchmark how much time each of the models takes to make a prediction on a new image.
We have removed the need to annotate images; we have expert annotators who will annotate your images for you.
We automatically train the best model for you; to achieve this we run a battery of models with different parameters and select the best one for your data.
NanoNets is entirely in the cloud and runs without using any of your hardware. Which makes it much easier to use.
Since devices like the Raspberry Pi and mobile phones were not built to run complex compute heavy tasks, you can outsource the workload to our cloud which does all of the compute for you
Get your free API Key from http://app.nanonets.com/user/api_key
Collect the images of the object you want to detect. You can annotate them either using our web UI (https://app.nanonets.com/ObjectAnnotation/?appId=YOUR_MODEL_ID) or use an open source tool like labelImg. Once you have the dataset ready in folders, images (image files) and annotations (annotations for the image files), start uploading the dataset.
Once the Images have been uploaded, begin training the Model
The model takes ~2 hours to train. You will get an email once the model is trained. In the meanwhile you can check the state of the model.
Once the model is trained, you can make predictions using the model.
Founder & CEO @ NanoNets.com
NanoNets: Machine Learning API
|
Favio Vázquez | 3.3K | 14 | https://towardsdatascience.com/a-weird-introduction-to-deep-learning-7828803693b0?source=tag_archive---------2---------------- | A “weird” introduction to Deep Learning – Towards Data Science | There are amazing introductions, courses and blog posts on Deep Learning. I will name some of them in the resources sections, but this is a different kind of introduction.
But why weird? Maybe because it won’t follow the “normal” structure of a Deep Learning post, where you start with the math, then go into the papers, the implementation and then to applications.
It will be closer to the post I did before about “My journey into Deep Learning”; I think telling a story can be much more helpful than just throwing information and formulas everywhere. So let’s begin.
NOTE: There’s a companion webinar to this article. Find it here:
Sometimes it is important to have a written backup of your thoughts. I tend to talk a lot, and to be present at several presentations and conferences, and this is my way of contributing a little knowledge to everyone.
Deep Learning (DL) is such an important field for Data Science, AI, technology and our lives right now, and it deserves all of the attention it is getting. Please don’t say that deep learning is just adding a layer to a neural net, and that’s it, magic! Nope. I’m hoping that after reading this you have a different perspective of what DL is.
I just created this timeline based on several papers and other timelines with the purpose of showing everyone that Deep Learning is much more than just neural networks. There have been real theoretical advances, and software and hardware improvements, that were necessary for us to get to this day. If you want it, just ping me and I’ll send it to you. (Find my contact at the end of the article.)
Deep Learning has been around for quite a while now. So why did it become so relevant so fast in the last 5–7 years?
As I said before, until the late 2000s, we were still missing a reliable way to train very deep neural networks. Nowadays, with the development of several simple but important theoretical and algorithmic improvements, the advances in hardware (mostly GPUs, now TPUs), and the exponential generation and accumulation of data, DL came naturally to fit this missing spot to transform the way we do machine learning.
Deep Learning is an active field of research too; nothing is settled or closed. We are still searching for the best models, topologies of the networks, and the best ways to optimize their hyperparameters, and more. It is very hard, as in any other active field of science, to keep up to date with the research, but it’s not impossible.
A side note on topology and machine learning (Deep Learning with Topological Signatures by Hofer et al.):
Luckily for us, there are lots of people helping understand and digest all of this information through courses like the Andrew Ng one, blog posts and much more.
This for me is weird, or uncommon, because normally you have to wait some time (sometimes years) to be able to digest difficult and advanced information from papers or research journals. Of course, most areas of science are now also really fast in getting from a paper to a blog post that tells you what you need to know, but in my opinion DL has a different feel.
We are working with something that is very exciting, most people in the field are saying that the last ideas in the papers of deep learning (specifically new topologies and configurations for NN or algorithms to improve their usage) are the best ideas in Machine Learning in decades (remember that DL is inside of ML).
I’ve used the word learning a lot in this article so far. But what is learning?
In the context of Machine Learning, the word “learning” describes an automatic search process for better representations of the data you are analyzing and studying (please keep this in mind: it is not about magically making a computer learn).
This is a very important word for this field, REP-RE-SEN-TA-TION. Don’t forget about it. What is a representation? It’s a way to look at data.
Let me give you an example. Let’s say I want you to draw a line that separates the blue circles from the green triangles in this plot:
So, if you want to use a line this is what the author says:
This is impossible if we remember the concept of a line:
So is the case lost? Actually no. If we find a way of representing this data differently, we can draw a straight line to separate the two types of data. This is something that math taught us hundreds of years ago. In this case what we need is a coordinate transformation, so we can plot or represent this data in a way that lets us draw that line. If we use the polar coordinate transformation, we have the solution:
And that’s it, now we can draw a line:
So, in this simple example we found and chose the transformation to get a better representation by hand. But if we create a system, a program, that can search for different representations (in this case a coordinate change), and then find a way of calculating the percentage of categories being classified correctly with this new approach, at that moment we are doing Machine Learning.
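A tiny sketch of this coordinate change, with made-up data standing in for the circles and triangles:

import numpy as np

angles = np.random.uniform(0, 2 * np.pi, 200)
inner = np.c_[1.0 * np.cos(angles), 1.0 * np.sin(angles)]  # one class, radius 1
outer = np.c_[3.0 * np.cos(angles), 3.0 * np.sin(angles)]  # other class, radius 3

def to_polar(points):
    # New representation: radius and angle instead of x and y.
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0])
    return np.c_[r, theta]

# In polar coordinates a single straight line (r = 2) separates the classes.
print(to_polar(inner)[:, 0].max(), to_polar(outer)[:, 0].min())  # 1.0 vs 3.0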
This is something very important to keep in mind: deep learning is representation learning, using different kinds of neural networks and optimizing the hyperparameters of the net to get (learn) the best representation for our data.
This wouldn’t be possible without the amazing breakthroughs that led us to the current state of Deep Learning. Here I name some of them:
Learning representations by back-propagating errors by David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams.
A theoretical framework for Back-Propagation by Yann Lecun.
2. Idea: Better initialization of the parameters of the nets. Something to remember: The initialization strategy should be selected according to the activation function used (next).
3. Idea: Better activation functions. This means better ways of approximating the functions, leading to a faster training process.
4. Idea: Dropout. Better ways of preventing overfitting and more (see the sketch after this list).
Dropout: A Simple Way to Prevent Neural Networks from Overfitting, a great paper by Srivastava, Hinton and others.
5. Idea: Convolutional Neural Nets (CNNs).
Gradient based learning applied to document recognition by Lecun and others
ImageNet Classification with Deep Convolutional Neural Networks by Krizhevsky and others.
6. Idea: Residual Nets (ResNets).
7. Idea: Region Based CNNs. Used for object detection and more.
8. Idea: Recurrent Neural Networks (RNNs) and LSTMs.
BTW: It was shown by Liao and Poggio (2016) that ResNets == RNNs, arXiv:1604.03640v1.
9. Idea: Generative Adversarial Networks (GANs).
10. Idea: Capsule Networks.
And there are many others but I think those are really important theoretical and algorithmic breakthroughs that are changing the world, and that gave momentum for the DL revolution.
It’s not easy to get started but I’ll try my best to guide you through this process. Check out these resources, but remember, this is not only about watching videos and reading papers; it’s about understanding, programming, coding, failing and then making it happen.
-1. Learn Python and R ;)
0. Andrew Ng and Coursera (you know, he doesn’t need an intro):
Siraj Raval: He’s amazing. He has the power to explain hard concepts in a fun and easy way. Follow him on his YouTube channel. Specifically these playlists:
— The Math of Intelligence:
— Intro to Deep Learning:
3. François Chollet’s book: Deep Learning with Python (and R):
4. IBM Cognitive Class:
5. DataCamp:
Deep Learning is one of the most important tools and theories a Data Scientist should learn. We are so lucky to see amazing people creating research, software, tools and hardware specific to DL tasks.
DL is computationally expensive, and even though there have been advances in theory, software and hardware, we need the developments in Big Data and Distributed Machine Learning to improve performance and efficiency. Great people and companies are making amazing efforts to bring together the distributed frameworks (Spark) and the DL libraries (TensorFlow and Keras).
Here’s an overview:
2. Elephas: Distributed DL with Keras & PySpark:
3. Yahoo! Inc.: TensorFlowOnSpark:
4. CERN Distributed Keras (Keras + Spark) :
5. Qubole (tutorial Keras + Spark):
6. Intel Corporation: BigDL (Distributed Deep Learning Library for Apache Spark)
7. TensorFlow and Spark on Google Cloud:
As I’ve said before, one of the most important moments for this field was the creation and open-sourcing of TensorFlow.
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
The things you are seeing in the image above are tensor manipulations working with the Riemann Tensor in General Relativity.
Tensors, defined mathematically, are simply arrays of numbers, or functions, that transform according to certain rules under a change of coordinates.
But in the scope of Machine Learning and Deep Learning a tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes.
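As a tiny illustration (my own sketch, using the TensorFlow 1.x graph API that was current at the time of writing), here is a data flow graph whose nodes are operations and whose edges carry tensors of different ranks:

```python
import tensorflow as tf

# Rank-0, rank-1 and rank-2 tensors (a scalar, a vector and a matrix).
a = tf.constant(3.0)
v = tf.constant([1.0, 2.0, 3.0])
m = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Nodes in the graph are operations; the edges between them are tensors.
scaled = a * m                 # element-wise scaling
product = tf.matmul(m, m)      # matrix multiplication

with tf.Session() as sess:     # TF 1.x: nothing runs until we launch a session
    print(sess.run(v))         # [1. 2. 3.]
    print(sess.run(scaled))    # the matrix scaled by 3
    print(sess.run(product))   # the matrix product
```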
We use tensors heavily, all the time, in DL, but you don’t need to be an expert in them to use them. You may need to understand a little bit about them, so here I list some good resources:
After you check that out, along with the breakthroughs I mentioned before and the programming frameworks like TensorFlow or Keras (for more on Keras go here), I think you have an idea of what you need to understand and work with Deep Learning.
But what have we achieved so far with DL? To name a few (from François Chollet’s book on DL):
And much more. Here’s a list of 30 great and funny applications of DL:
Thinking about the future of Deep Learning (for programming or building applications), I’ll repeat what I said in other posts.
I really think GUIs and AutoML are the near future of getting things done with Deep Learning. Don’t get me wrong, I love coding, but I think the amount of code we will be writing in the coming years will decrease.
We cannot spend so many hours worldwide programming the same stuff over and over again, so I think these two features (GUIs and AutoML) will help Data Scientists become more productive and solve more problems.
One of the best free platforms for doing these tasks in a simple GUI is Deep Cognition. Their simple drag & drop interface helps you design deep learning models with ease. Deep Learning Studio can automatically design a deep learning model for your custom dataset, thanks to their advanced AutoML feature, with nearly one click.
Here you can learn more about them:
Take a look at the prices :O, it’s freeeee :)
I mean, it’s amazing how fast the development in the area is right now, that we can have simple GUIs to interact with all the hard and interesting concepts I talked about in this post.
One of the things I like about that platform is that you can still code and interact with TensorFlow, Keras, Caffe, MXNet and much more from the command line or their Notebook, without installing anything. You have both the notebook and the CLI!
I take my hat off to them and their contribution to society.
Other interesting applications of deep learning that you can try for free or for little cost are (some of them are on private betas):
Thanks for reading this weird introduction to Deep Learning. I hope it helped you get started in this amazing area, or maybe just discover something new.
If you have questions just add me on LinkedIn and we’ll chat there:
|
Oleksandr Savsunenko | 5.5K | 4 | https://hackernoon.com/the-new-neural-internet-is-coming-dda85b876adf?source=tag_archive---------3---------------- | The New Neural Internet is Coming – Hacker Noon | How it all began / The Landscape
Think of the typical and well-studied neural networks (such as image classifier) as a left hemisphere of the neural network technology. With this in mind, it is easy to understand what is Generative Adversarial Network. It is a kind of right hemisphere — the one that is claimed to be responsible for creativity.
The Generative Adversarial Networks (GANs) are the first step of neural networks technology learning creativity. Typical GAN is a neural network trained to generate images on the certain topic using an image dataset and some random noise as a seed. Up until now images created by GANs were of low quality and limited in resolution. Recent advances by NVIDIA showed that it is within a reach to generate photorealistic images in high-resolution and they published the technology itself in open-access.
There is a plethora of GAN types of various complexity, architectures, and strange acronyms. We are mostly interested here in conditional GANs and variational autoencoders. Conditional GANs are capable of not just mimicking a broad type of image such as “bedroom”, “face” or “dog”, but also diving into more specific categories. For example, the Text2Image network is capable of translating a textual image description into the image itself.
By varying the random seed that is concatenated to the “meanings” vector, we are able to produce an infinite number of bird images matching the description.
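To make the idea of a seed concatenated to a “meanings” vector concrete, here is a rough Keras sketch of a conditional generator (my own illustration, not the Text2Image code; the layer sizes and dimensions are arbitrary):

```python
from keras.layers import Input, Dense, Concatenate, Reshape
from keras.models import Model

noise_dim, text_dim = 100, 128             # arbitrary sizes, for illustration only

noise = Input(shape=(noise_dim,))          # the random seed
text_embedding = Input(shape=(text_dim,))  # the encoded description, e.g. of a bird

# The generator conditions on the description by concatenating it with the noise.
h = Concatenate()([noise, text_embedding])
h = Dense(256, activation='relu')(h)
h = Dense(64 * 64 * 3, activation='tanh')(h)
image = Reshape((64, 64, 3))(h)

generator = Model(inputs=[noise, text_embedding], outputs=image)

# Sampling many different noise vectors while keeping the same text embedding
# yields an endless stream of distinct images that all match the same description.
```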
Let’s just close our eyes and see the world in 2 years. Companies like NVIDIA will push GAN technology to an industry-ready level, the same as they did with celebrity face generation. This means that a GAN will be able to generate any image, on-demand, on-the-fly, based on a textual (for example) description. This will render obsolete a number of photography and design related industries. Here’s how this will work.
Again, the network is able to generate an infinite number of images by varying the random seed.
And here’s the scary part. Such a network can receive not only a description of the target object it needs to generate, but also a vector describing you — the ad consumer. This ad can have a very deep description of your personality, web browsing history, recent transactions, and geolocation, so the GAN will generate a one-time, unique ad that fits you perfectly. CTR is going sky-high.
By measuring your reactions the network will adapt and make ads targeting you more and more precisely, hitting your soft spots.
So, at the end of the day, we are going to see fully personalized content everywhere on the Internet.
Everyone will see fully custom versions of all content, adapted to the consumer based on their lifestyle, opinions, and history. We all witnessed the rise of this bubble pattern after the latest USA elections and it’s going to get worse. GANs will be able to target content precisely at you with no limitations of the medium — starting from image ads and up to complex opinions, threads and publications generated by machines. This will create a constant feedback loop, improving based on your interactions. And there is going to be competition between different GANs. Kind of a fully automated war of psychological manipulation, with humanity as the battlefield. The driving force behind this trend is extremely simple — profits.
And this is not a scary doomsday scenario, this actually is happening today.
I have no idea. But surely we need a few things: broad public discussion about this technology’s inevitable arrival, and a backup plan to stop it. So it’s better to start thinking now — how can we fight this process and benefit from it at the same time?
We are not there yet due to some technical limitations. Up until recently, images generated by GANs were just of bad quality and easily spotted as fake. NVIDIA showed that it is actually doable to generate extremely realistic 1024x1024 faces. To move things forward we would need faster and bigger GPUs, more theoretical studies on GANs, more smart hacks around GAN training, more labeled datasets, etc.
Please notice — we don’t need new power sources, quantum processors (though they can help), general AI or some other purely theoretical cool new thing to reach this point. All we need is within reach of a few years, and big corporations likely already have these kinds of resources available.
Also, we will need smarter neural networks. I am definitely looking forward to progress in the capsules approach by Hinton et al. And of course, we will be the first to implement this in super-resolution technology, which should benefit heavily from GAN progress.
Let me know what you think.
|
Max Pechyonkin | 3.4K | 8 | https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a?source=tag_archive---------4---------------- | Stochastic Weight Averaging — a New Way to Get State of the Art Results in Deep Learning | In this article, I will discuss two interesting recent papers that provide an easy way to improve performance of any given neural network by using a smart way to ensemble. They are:
Additional prerequisite reading that will make context of this post much more easy to understand:
Traditional ensembling combines several different models and makes them predict on the same input. Then some way of averaging is used to determine the final prediction of the ensemble. It can be simple voting, an average or even another model that learns to predict correct value or label based on the inputs of models in the ensemble. Ridge regression is one particular way of combining several predictions which is used by Kaggle-winning machine learning practitioners.
When applied in deep learning, ensembling can be used to combine predictions of several neural networks to produce one final prediction. Usually it is a good idea to use neural networks of different architectures in an ensemble, because they will likely make mistakes on different training samples and therefore the benefit of ensembling will be larger.
However, you can also ensemble models with the same architecture and it will give surprisingly good results. One very cool trick exploiting this approach was proposed in the snapshot ensembling paper. The authors take weight snapshots while training the same network and then, after training, create an ensemble of nets with the same architecture but different weights. This allows improved test performance, and it is a very cheap way too, because you just train one model once, saving weights from time to time.
You can refer to this awesome post for more details. If you aren’t yet using cyclical learning rates, then you definitely should, as they are becoming the standard state-of-the-art training technique that is very simple, not computationally heavy and provides significant gains at almost no additional cost.
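Here is a rough PyTorch-style sketch of that recipe (my own, not the paper’s code; train_one_epoch stands in for whatever training loop you already have): anneal the learning rate with a cosine schedule inside each cycle and snapshot the weights at the end of every cycle.

```python
import copy
import math

def snapshot_train(model, optimizer, train_one_epoch,
                   n_cycles=5, epochs_per_cycle=40, lr_max=0.1):
    """Collect one weight snapshot per learning-rate cycle."""
    snapshots = []
    for cycle in range(n_cycles):
        for epoch in range(epochs_per_cycle):
            # Cosine annealing: start each cycle at lr_max, decay towards 0.
            lr = 0.5 * lr_max * (1 + math.cos(math.pi * epoch / epochs_per_cycle))
            for group in optimizer.param_groups:
                group['lr'] = lr
            train_one_epoch(model, optimizer)
        # End of cycle: SGD has settled into some local solution -> snapshot it.
        snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots

# At test time, load each snapshot into the model in turn, predict, and
# average the predictions of the ensemble members.
```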
All of the examples above are ensembles in the model space, because they combine several models and then use models’ predictions to produce the final prediction.
In the paper that I am discussing in this post, however, the authors propose to use a novel ensembling in the weights space. This method produces an ensemble by combining weights of the same network at different stages of training and then uses this model with combined weights to make predictions. There are 2 benefits from this approach:
Let’s see how it works. But first we need to understand some important facts about loss surfaces and generalizable solutions.
The first important insight is that a trained network is a point in multidimensional weight space. For a given architecture, each distinct combination of network weights produces a separate model. Since there are infinitely many combinations of weights for any given architecture, there will be infinitely many solutions. The goal of training of a neural network is to find a particular solution (point in the weight space) that will provide low value of the loss function both on training and testing data sets.
During training, by changing the weights, the training algorithm changes the network and travels in the weight space. The gradient descent algorithm travels on a loss surface in this space, where the elevation is given by the value of the loss function.
It is very hard to visualize and understand the geometry of multidimensional weight space. At the same time, it is very important to understand it because stochastic gradient descent essentially traverses a loss surface in this highly multidimensional space during training and tries to find a good solution — a “point” on the loss surface where loss value is low. It is known that such surfaces have many local optima. But it turns out that not all of them are equally good.
One metric that can distinguish a good solution from a bad one is its flatness. The idea being that training data set and testing data set will produce similar but not exactly the same loss surfaces. You can imagine that a test surface will be shifted a bit relative to the train surface. For a narrow solution, during test time, a point that gave low loss can have a large loss because of this shift. This means that this “narrow” solution did not generalize well — training loss is low, while testing loss is large. On the other hand, for a “wide” and flat solution, this shift will lead to training and testing loss being close to each other.
I explained the difference between narrow and wide solutions because the new method which is the focus of this post leads to nice and wide solutions.
Initially, SGD will make a big jump in the weight space. Then, as the learning rate gets smaller due to cosine annealing, SGD will converge to some local solution and the algorithm will take a “snapshot” of the model by adding it to the ensemble. Then the rate is reset to high value again and SGD takes a large jump again before converging to some different local solution.
Cycle length in the snapshot ensembling approach is 20 to 40 epochs. The idea of long learning rate cycles is to be able to find sufficiently different models in the weight space. If the models are too similar, then predictions of the separate networks in the ensemble will be too close and the benefit of ensembling will be negligible.
Snapshot ensembling works really well and improves model performance, but Fast Geometric Ensembling works even better.
Fast geometric ensembling is very similar to snapshot ensembling, but it has some distinguishing features. It uses a linear piecewise cyclical learning rate schedule instead of cosine. Secondly, the cycle length in FGE is much shorter — only 2 to 4 epochs per cycle. At first intuition, the short cycle seems wrong because the models at the end of each cycle will be close to each other and therefore ensembling them should not give any benefits. However, as the authors discovered, because there exist connected paths of low loss between sufficiently different models, it is possible to travel along those paths in small steps and the models encountered along the way will be different enough to allow ensembling them with good results. Thus, FGE shows improvement compared to snapshot ensembles and it takes smaller steps to find the models (which makes it faster to train).
To benefit from both snapshot ensembling or FGE, one needs to store multiple models and then make predictions for all of them before averaging for the final prediction. Thus, for additional performance of the ensemble, one needs to pay with higher amount of computation. So there is no free lunch there. Or is there? This is where the new paper with stochastic weight averaging comes in.
Stochastic weight averaging closely approximates fast geometric ensembling but at a fraction of the computational cost. SWA can be applied to any architecture and data set and shows good results on all of them. The paper suggests that SWA leads to wider minima, the benefits of which I discussed above. SWA is not an ensemble in its classical understanding. At the end of training you get one model, but its performance beats snapshot ensembles and approaches FGE.
Intuition for SWA comes from empirical observation that local minima at the end of each learning rate cycle tend to accumulate at the border of areas on loss surface where loss value is low (points W1, W2 and W3 are at the border of the red area of low loss in the left panel of figure above). By taking the average of several such points, it is possible to achieve a wide, generalizable solution with even lower loss (Wswa in the left panel of the figure above).
Here is how it works. Instead of an ensemble of many models, you only need two models:
At the end of each learning rate cycle, the current weights of the second model will be used to update the weights of the running average model by taking a weighted mean between the old running average weights and the new set of weights from the second model (formula provided in the figure on the left). By following this approach, you only need to train one model, and store only two models in memory during training. For prediction, you only need the running average model, and predicting with it is much faster than using the ensemble described above, where you use many models to predict and then average the results.
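Here is a minimal sketch of that running-average update in PyTorch-style code (my own illustration; the authors’ official implementation is linked below):

```python
def update_swa(swa_model, model, n_averaged):
    """Fold the current weights into the running average at the end of a cycle.

    new_avg = (old_avg * n + current) / (n + 1), applied weight by weight.
    """
    for w_swa, w in zip(swa_model.parameters(), model.parameters()):
        w_swa.data.mul_(n_averaged / (n_averaged + 1.0))
        w_swa.data.add_(w.data / (n_averaged + 1.0))
    return n_averaged + 1

# Usage: start with swa_model as a deep copy of model and n_averaged = 1, then
# call update_swa(...) at the end of every learning-rate cycle. Prediction uses
# swa_model only, so no ensemble of stored networks is needed.
# (Before evaluating swa_model, the paper also recomputes batch-norm statistics
# with one extra pass over the training data.)
```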
Authors of the paper provide their own implementation in PyTorch.
Also, SWA is implemented in the awesome fast.ai library that everyone should be using. And if you haven’t yet seen their course, then follow the links.
You can follow me on Twitter. Let’s also connect on LinkedIn.
|
Daniel Simmons | 3.4K | 8 | https://itnext.io/you-can-build-a-neural-network-in-javascript-even-if-you-dont-really-understand-neural-networks-e63e12713a3?source=tag_archive---------5---------------- | You can build a neural network in JavaScript even if you don’t really understand neural networks | Click here to share this article on LinkedIn »
(Skip this part if you just want to get on with it...)
I should really start by admitting that I’m no expert in neural networks or machine learning. To be perfectly honest, most of it still completely baffles me. But hopefully that’s encouraging to any fellow non-experts who might be reading this, eager to get their feet wet in M.L.
Machine learning was one of those things that would come up from time to time and I’d think to myself “yeah, that would be pretty cool... but I’m not sure that I want to spend the next few months learning linear algebra and calculus.”
Like a lot of developers, however, I’m pretty handy with JavaScript and would occasionally look for examples of machine learning implemented in JS, only to find heaps of articles and StackOverflow posts about how JS is a terrible language for M.L., which, admittedly, it is. Then I’d get distracted and move on, figuring that they were right and I should just get back to validating form inputs and waiting for CSS grid to take off.
But then I found Brain.js and I was blown away. Where had this been hiding?! The documentation was well written and easy to follow and within about 30 minutes of getting started I’d set up and trained a neural network. In fact, if you want to just skip this whole article and just read the readme on GitHub, be my guest. It’s really great.
That said, what follows is not an in-depth tutorial about neural networks that delves into hidden input layers, activation functions, or how to use Tensorflow. Instead, this is a dead-simple, beginner level explanation of how to implement Brain.js that goes a bit beyond the documentation.
Here’s a general outline of what we’ll be doing:
If you’d prefer to just download a working version of this project rather than follow along with the article then you can clone the GitHub repository here.
Create a new directory and plop a good ol’ index.html boilerplate file in there. Then create three JS files: brain.js, training-data.js, and scripts.js (or whatever generic term you use for your default JS file) and, of course, import all of these at the bottom of your index.html file.
Easy enough so far.
Now go here to get the source code for Brain.js. Copy & paste the whole thing into your empty brain.js file, hit save and bam: 2 out of 4 files are finished.
Next is the fun part: deciding what your machine will learn. There are countless practical problems that you can solve with something like this; sentiment analysis or image classification for example. I happen to think that applications of M.L. that process text as input are particularly interesting because you can find training data virtually everywhere and they have a huge variety of potential use cases, so the example that we’ll be using here will be one that deals with classifying text:
We’ll be determining whether a tweet was written by Donald Trump or Kim Kardashian.
Ok, so this might not be the most useful application. But Twitter is a treasure trove of machine learning fodder and, useless though it may be, our tweet-author-identifier will nevertheless illustrate a pretty powerful point. Once it’s been trained, our neural network will be able to look at a tweet that it has never seen before and then be able to determine whether it was written by Donald Trump or by Kim Kardashian just by recognizing patterns in the things they write. In order to do that, we’ll need to feed it as much training data as we can bear to copy / paste into our training-data.js file and then we can see if we can identify ourselves some tweet authors.
Now all that’s left to do is set up Brain.js in our scripts.js file and feed it some training data in our training-data.js file. But before we do any of that, let’s start with a 30,000-foot view of how all of this will work.
Setting up Brain.js is extremely easy so we won’t spend too much time on that but there are a few details about how it’s going to expect its input data to be formatted that we should go over first. Let’s start by looking at the setup example that’s included in the documentation (which I’ve slightly modified here) that illustrates all this pretty well:
First of all, the example above is actually a working A.I (it looks at a given color and tells you whether black text or white text would be more legible on it). Which hopefully illustrates how easy Brain.js is to use. Just instantiate it, train it, and run it. That’s it. I mean, if you inlined the training data that would be 3 lines of code. Pretty cool.
Now let’s talk about training data for a minute. There are two important things to note in the above example other than the overall input: {}, output: {} format of the training data.
First, the data do not need to be all the same length. As you can see on line 11 above, only an R and a B value get passed whereas the other two inputs pass an R, G, and B value. Also, even though the example above shows the input as objects, it’s worth mentioning that you could also use arrays. I mention this largely because we’ll be passing arrays of varying length in our project.
Second, those are not valid RGB values. Every one of them would come out as black if you were to actually use it. That’s because input values have to be between 0 and 1 in order for Brain.js to work with them. So, in the above example, each color had to be processed (probably just fed through a function that divides it by 255 — the max value for RGB) in order to make it work. And we’ll be doing the same thing.
So if we want our neural network to accept tweets (i.e. strings) as input, we’ll need to run them through a similar function (called encode() below) that will turn every character in a string into a value between 0 and 1 and store it in an array. Fortunately, JavaScript has a native method for converting any character into a character code called charCodeAt(). So we’ll use that and divide the outcome by the max value for Extended ASCII characters: 255 (we’re using extended ASCII just in case we encounter any fringe cases like é or 1⁄2), which will ensure that we get a value <1.
Also, we’ll be storing our training data as plain text, not as the encoded data that we’ll ultimately be feeding into our A.I. - you’ll thank me for this later. So we’ll need another function (called processTrainingData() below) that will apply the previously mentioned encoding function to our training data, selectively converting the text into encoded characters, and returning an array of training data that will play nicely with Brain.js
So here’s what all of that code will look like (this goes into your ‘scripts.js’ file):
Something that you’ll notice here that wasn’t present in the example from the documentation shown earlier (other than the two helper functions that we’ve already gone over) is on line 20 in the train() function, which saves the trained neural network to a global variable called trainedNet . This prevents us from having to re-train our neural network every time we use it. Once the network is trained and saved to the variable, we can just call it like a function and pass in our encoded input (as shown on line 25 in the execute() function) to use our A.I.
Alright, so now your index.html, brain.js, and scripts.js files are finished. Now all we need is to put something into training-data.js and we’ll be ready to go.
Last but not least, our training data. Like I mentioned, we’re storing all our tweets as text and encoding them into numeric values on the fly, which will make your life a whole lot easier when you actually need to copy / paste training data. No formatting necessary. Just paste in the text and add a new row.
Add that to your ‘training-data.js’ file and you’re done!
Note: although the above example only shows 3 samples from each person, I used 10 of each; I just didn’t want this sample to take up too much space. Of course, your neural network’s accuracy will increase proportionally to the amount of training data that you give it, so feel free to use more or less than me and see how it affects your outcomes
Now, to run your newly-trained neural network just throw an extra line at the bottom of your ‘script.js’ file that calls the execute() function and passes in a tweet from Trump or Kardashian; make sure to console.log it because we haven’t built a UI. Here’s a tweet from Kim Kardashian that was not in my training data (i.e. the network has never encountered this tweet before):
Then pull up your index.html page on localhost, check the console, aaand...
There it is! The network correctly identified a tweet that it had never seen before as originating from Kim Kardashian, with a certainty of 86%.
Now let’s try it again with a Trump tweet:
And the result...
Again, a never-before-seen tweet. And again, correctly identified! This time with 97% certainty.
Now you have a neural network that can be trained on any text that you want! You could easily adapt this to identify the sentiment of an email or your company’s online reviews, identify spam, classify blog posts, determine whether a message is urgent or not, or any of a thousand different applications. And as useless as our tweet identifier is, it still illustrates a really interesting point: that a neural network like this can perform tasks as nuanced as identifying someone based on the way they write.
So even if you don’t go out and create an innovative or useful tool that’s powered by machine learning, this is still a great bit of experience to have in your developer tool belt. You never know when it might come in handy or even open up new opportunities down the road.
Once again, all of this is available in a GitHub repo here:
|
Eugenio Culurciello | 2.8K | 13 | https://towardsdatascience.com/artificial-intelligence-ai-in-2018-and-beyond-e06f05167f9c?source=tag_archive---------6---------------- | Artificial Intelligence, AI in 2018 and beyond – Towards Data Science | These are my opinions on where deep neural network and machine learning is headed in the larger field of artificial intelligence, and how we can get more and more sophisticated machines that can help us in our daily routines.
Please note that these are not predictions or forecasts, but more a detailed analysis of the trajectory of the fields, the trends and the technical needs we have to achieve useful artificial intelligence.
Not all machine learning is targeting artificial intelligences, and there are low-hanging fruits, which we will examine here also.
The goal of the field is to achieve human and super-human abilities in machines that can help us in every-day lives. Autonomous vehicles, smart homes, artificial assistants, security cameras are a first target. Home cooking and cleaning robots are a second target, together with surveillance drones and robots. Another one is assistants on mobile devices or always-on assistants. Another is full-time companion assistants that can hear and see what we experience in our life. One ultimate goal is a fully autonomous synthetic entity that can behave at or beyond human level performance in everyday tasks.
See more about these goals here, and here, and here.
Software is defined here as neural network architectures trained with an optimization algorithm to solve a specific task.
Today neural networks are the de-facto tool for learning to solve tasks that involve supervised learning to categorize data from a large dataset.
But this is not artificial intelligence, which requires acting in the real world, often learning without supervision and from experiences never seen before, often combining previous knowledge in disparate circumstances to solve the current challenge.
Neural network architectures — when the field boomed, a few years back, we often said it had the advantage of learning the parameters of an algorithm automatically from data, and as such was superior to hand-crafted features. But we conveniently forgot to mention one little detail... the neural network architecture that is at the foundation of training to solve a specific task is not learned from data! In fact it is still designed by hand. Hand-crafted from experience, and it is currently one of the major limitations of the field. There is research in this direction: here and here (for example), but much more is needed. Neural network architectures are the fundamental core of learning algorithms. Even if our learning algorithms are capable of mastering a new task, if the neural network architecture is not correct, they will not be able to. The problem with learning neural network architectures from data is that it currently takes too long to experiment with multiple architectures on a large dataset. One has to try training multiple architectures from scratch and see which one works best. Well, this is exactly the time-consuming trial-and-error procedure we are using today! We ought to overcome this limitation and put more brain-power on this very important issue.
Unsupervised learning — we cannot always be there for our neural networks, guiding them at every step of their lives and every experience. We cannot afford to correct them at every instance, and provide feedback on their performance. We have our lives to live! But that is exactly what we do today with supervised neural networks: we offer help at every instance to make them perform correctly. Instead humans learn from just a handful of examples, and can self-correct and learn more complex data in a continuous fashion. We have talked about unsupervised learning extensively here.
Predictive neural networks — A major limitation of current neural networks is that they do not possess one of the most important features of human brains: their predictive power. One major theory about how the human brain works is by constantly making predictions: predictive coding. If you think about it, we experience it every day. Say you lift an object that you thought was light but turned out to be heavy. It surprises you, because as you approached to pick it up, you had predicted how it was going to affect you and your body, or your environment overall.
Prediction allows us not only to understand the world, but also to know when we do not, and when we should learn. In fact we save information about things that we do not know and that surprise us, so next time they will not! And cognitive abilities are clearly linked to our attention mechanism in the brain: our innate ability to forgo 99.9% of our sensory inputs, only to focus on the data that is very important for our survival — where is the threat and where do we run to to avoid it. Or, in the modern world, where is my cell-phone as we walk out the door in a rush.
Building predictive neural networks is at the core of interacting with the real world, and acting in a complex environment. As such this is the core network for any work in reinforcement learning. See more below.
We have talked extensively about the topic of predictive neural networks, and were one of the pioneering groups to study them and create them. For more details on predictive neural networks, see here, and here, and here.
Limitations of current neural networks — We have talked before about the limitations of neural networks as they are today. They cannot predict, cannot reason on content, and have temporal instabilities; we need a new kind of neural network that you can read about here.
Neural Network Capsules are one approach to solve the limitation of current neural networks. We reviewed them here. We argue here that Capsules have to be extended with a few additional features:
Continuous learning — this is important because neural networks need to continue to learn new data-points continuously throughout their lives. Current neural networks are not able to learn new data without being re-trained from scratch at every instance. Neural networks need to be able to self-assess the need for new training and the fact that they already know something. This is also needed to perform in real life and for reinforcement learning tasks, where we want to teach machines to do new tasks without forgetting older ones.
For more detail, see this excellent blog post by Vincenzo Lomonaco.
Transfer learning — or how do we have these algorithms learn on their own by watching videos, just like we do when we want to learn how to cook something new? That is an ability that requires all the components we listed above, and is also important for reinforcement learning. Now you can really train your machine to do what you want by just giving it an example, the same way we humans do every day!
Reinforcement learning — this is the holy grail of deep neural network research: teach machines how to learn to act in an environment, the real world! This requires self-learning, continuous learning, predictive power, and a lot more we do not know. There is much work in the field of reinforcement learning, but to the author it is really only scratching the surface of the problem, still millions of miles away from it. We already talked about this here.
Reinforcement learning is often referred to as the “cherry on the cake”, meaning that it is just minor training on top of a plastic synthetic brain. But how can we get a “generic” brain that then solves all problems easily? It is a chicken-and-egg problem! Today, to solve reinforcement learning problems, one by one, we use standard neural networks:
Both these components are obvious solutions to the problem, and currently are clearly wrong, but that is what everyone uses because they are some of the available building blocks. As such results are unimpressive: yes we can learn to play video-games from scratch, and master fully-observable games like chess and go, but I do not need to tell you that is nothing compared to solving problems in a complex world. Imagine an AI that can play Horizon Zero Dawn better than humans... I want to see that!
But this is what we want. Machine that can operate like us.
Our proposal for reinforcement learning work is detailed here. It uses a predictive neural network that can operate continuously and an associative memory to store recent experiences.
No more recurrent neural networks — recurrent neural networks (RNNs) have their days numbered. RNNs are particularly bad at parallelizing for training and are also slow even on special custom machines, due to their very high memory bandwidth usage — as such they are memory-bandwidth-bound, rather than computation-bound; see here for more details. Attention-based neural networks are more efficient and faster to train and deploy, and they suffer much less from scalability problems in training and deployment. Attention in neural networks has the potential to really revolutionize a lot of architectures, yet it has not been as recognized as it should be. The combination of associative memories and attention is at the heart of the next wave of neural network advancements.
Attention has already been shown to be able to learn sequences as well as RNNs do, and with up to 100x less computation! Who can ignore that?
We recognize that attention-based neural networks are going to slowly supplant RNN-based speech recognition, and also find their way into reinforcement learning architectures and AI in general.
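For readers who have not met it yet, here is the core of scaled dot-product attention in a few lines of NumPy (a generic sketch, not tied to any specific architecture mentioned in this post): every output position is a weighted mix of all values, with weights given by query-key similarity, and everything is computed in parallel instead of step by step as in an RNN.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of the values

# Every output row attends to the whole sequence at once; there is no
# recurrence, so the matrix products parallelize trivially on a GPU.
seq = np.random.randn(10, 64)
out = scaled_dot_product_attention(seq, seq, seq)    # self-attention
```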
Localization of information in categorization neural networks — We have talked about how we can localize and detect key-points in images and video extensively here. This is practically a solved problem, that will be embedded in future neural network architectures.
Hardware for deep learning is at the core of progress. Let us not forget that the rapid expansion of deep learning in 2008–2012 and in recent years is mainly due to hardware:
And we have talked about hardware extensively before. But we need to give you a recent update! The last 1–2 years saw a boom in the area of machine learning hardware, and in particular in hardware targeting deep neural networks. We have significant experience here, and we are FWDNXT, the makers of SnowFlake: a deep neural network accelerator.
There are several companies working in this space: NVIDIA (obviously), Intel, Nervana, Movidius, Bitmain, Cambricon, Cerebras, DeePhi, Google, Graphcore, Groq, Huawei, ARM, Wave Computing. All are developing custom high-performance micro-chips that will be able to train and run deep neural networks.
The key is to provide the lowest power and the highest measured performance while computing recent useful neural network operations, not raw theoretical operations per second — as many claim to do.
But few people in the field understand how hardware can really change machine learning, neural networks and AI in general. And few understand what is important in micro-chips and how to develop them.
Here is our list:
About neuromorphic neural networks hardware, please see here.
We talked briefly about applications in the Goals section above, but we really need to go into details here. How are AI and neural networks going to get into our daily lives?
Here is our list:
I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more...
If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference!
For interesting additional reading, please see:
|
Devin Soni | 5.8K | 4 | https://towardsdatascience.com/spiking-neural-networks-the-next-generation-of-machine-learning-84e167f4eb2b?source=tag_archive---------7---------------- | Spiking Neural Networks, the Next Generation of Machine Learning | Everyone who has been remotely tuned in to recent progress in machine learning has heard of the current 2nd generation artificial neural networks used for machine learning. These are generally fully connected, take in continuous values, and output continuous values. Although they have allowed us to make breakthrough progress in many fields, they are biologically inn-accurate and do not actually mimic the actual mechanisms of our brain’s neurons.
The 3rd generation of neural networks, spiking neural networks, aims to bridge the gap between neuroscience and machine learning, using biologically-realistic models of neurons to carry out computation. A spiking neural network (SNN) is fundamentally different from the neural networks that the machine learning community knows. SNNs operate using spikes, which are discrete events that take place at points in time, rather than continuous values. The occurrence of a spike is determined by differential equations that represent various biological processes, the most important of which is the membrane potential of the neuron. Essentially, once a neuron reaches a certain potential, it spikes, and the potential of that neuron is reset. The most common model for this is the Leaky integrate-and-fire (LIF) model. Additionally, SNNs are often sparsely connected and take advantage of specialized network topologies.
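A leaky integrate-and-fire neuron is simple enough to simulate in a few lines. Here is a rough NumPy sketch (my own, with arbitrary constants): the membrane potential integrates the input current, leaks back towards rest, and emits a discrete spike followed by a reset whenever it crosses the threshold.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron; returns a binary spike train."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leak towards rest and integrate the input (Euler step of the membrane equation).
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_threshold:          # threshold crossed: emit a discrete spike
            spikes[t] = 1.0
            v = v_reset               # the membrane potential is reset
    return spikes

current = np.where(np.arange(200) > 50, 1.5, 0.0)   # step input current
spike_train = simulate_lif(current)
print(int(spike_train.sum()), "spikes")
```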
At first glance, this may seem like a step backwards. We have moved from continuous outputs to binary, and these spike trains are not very interpretable. However, spike trains offer us enhanced ability to process spatio-temporal data, or in other words, real-world sensory data. The spatial aspect refers to the fact that neurons are only connected to neurons local to them, so these inherently process chunks of the input separately (similar to how a CNN would using a filter). The temporal aspect refers to the fact that spike trains occur over time, so what we lose in binary encoding, we gain in the temporal information of the spikes. This allows us to naturally process temporal data without the extra complexity that RNNs add. It has been proven, in fact, that spiking neurons are fundamentally more powerful computational units than traditional artificial neurons.
Given that these SNNs are more powerful, in theory, than 2nd generation networks, it is natural to wonder why we do not see widespread use of them. The main issue that currently lies in practical use of SNNs is that of training. Although we have unsupervised biological learning methods such as Hebbian learning and STDP, there are no known effective supervised training methods for SNNs that offer higher performance than 2nd generation networks. Since spike trains are not differentiable, we cannot train SNNs using gradient descent without losing the precise temporal information in spike trains. Therefore, in order to properly use SNNs for real-world tasks, we would need to develop an effective supervised learning method. This is a very difficult task, as doing so would involve determining how the human brain actually learns, given the biological realism in these networks.
Another issue, that we are much closer to solving, is that simulating SNNs on normal hardware is very computationally-intensive since it requires simulating differential equations. However, neuromorphic hardware such as IBM’s TrueNorth aims to solve this by simulating neurons using specialized hardware that can take advantage of the discrete and sparse nature of neuronal spiking behavior.
The future of SNNs therefore remains unclear. On one hand, they are the natural successor of our current neural networks, but on the other, they are quite far from being practical tools for most tasks. There are some current real-world applications of SNNs in real-time image and audio processing, but the literature on practical applications remains sparse. Most papers on SNNs are either theoretical, or show performance under that of a simple fully-connected 2nd generation network. However, there are many teams working on developing SNN supervised learning rules, and I remain optimistic for the future of SNNs.
Make sure you give this post 50 claps and my blog a follow if you enjoyed this post and want to see more.
|
Carlos E. Perez | 3.9K | 7 | https://medium.com/intuitionmachine/neurons-are-more-complex-than-what-we-have-imagined-b3dd00a1dcd3?source=tag_archive---------8---------------- | Surprise! Neurons are Now More Complex than We Thought!! | One of the biggest misconceptions around is the idea that Deep Learning (DL) or Artificial Neural Networks (ANN) mimics biological neurons. At best, ANN mimics a cartoonish version of a 1957 model of a neuron. Anyone claiming Deep Learning is biologically inspired is doing so for marketing purposes or has never bother to read biological literature. Neurons in Deep Learning are essentially mathematical functions that perform a similarity function of its inputs against internal weights. The closer a match is made, the more likely an action is performed (i.e. not sending a signal to zero). There are exceptions to this model (see: Autoregressive networks) however it is general enough to include the perceptron, convolution networks and RNNs.
Neurons are very different from DL constructs. They don’t maintain continuous signals but rather exhibit spiking or event-driven behavior. So, when you hear about “neuromorphic” hardware, it is inspired by “integrate-and-fire” neurons. These kinds of systems, at best, get a lot of press (see: IBM TrueNorth), but have never been shown to be effective. However, there has been some research work that has shown some progress (see: https://arxiv.org/abs/1802.02627v1). If you ask me, if you truly want to build biologically inspired cognition, then you should at the very least explore systems that are not continuous like DL. Biological systems, by nature, will use the least amount of energy to survive. DL systems, in stark contrast, are power hungry. That’s because DL is a brute-force method to achieve cognition. We know it works, we just don’t know how to scale it down.
Jeff Hawkins of Numenta has always lamented that a more biologically-inspired approach is needed. So, in his research in building cognitive machinery, he has architected systems that try to more closely mirror the structure of the neo-cortex. Numenta’s model of a neuron is considerably more elaborate than the Deep Learning model of a neuron as you can see in this graphic:
The team at Numenta is betting on this approach in the hope of creating something that is more capable than Deep Learning. It hasn’t been proved to be anywhere near successful. They’ve been doing this long enough that the odds of them succeeding are diminishing over time. By contrast, Deep Learning (despite its model of a cartoon neuron) has been shown to be unexpectedly effective in performing all kinds of mind-boggling feats of cognition. Deep Learning is doing something that is extraordinarily correct, we just don’t know exactly what that is!
Unfortunately, we have to throw in a new monkey wrench on all these research. New experiments on the nature of neurons have revealed that biological neurons are even more complex than we have imagined them to be:
In short, there is a lot more going on inside a single neuron than the simple idea of integrate-and-fire. Neurons may not be pure functions dependent on a single parameter (i.e. a weight) but rather stateful machines. Alternatively, perhaps the weight may not even be single-valued and instead requires complex values or maybe higher dimensions. This is all behavior that research has yet to explore, and thus we have little understanding of it to date.
If you think this throws a monkey wrench on our understanding, there’s an even newer discovery that reveals even greater complexity:
What this research reveals is that there is a mechanism for neurons to communicate with each other by sending packages of RNA code. To clarify, these are packages of instructions and not packages of data. There is a profound difference between sending codes and sending data. This implies that behavior from one neuron can change the behavior of another neuron; not through observation, but rather through injection of behavior.
This code exchange mechanism hints at the validity of my earlier conjecture: “Are biological brains made of only discrete logic?”
Experimental evidence reveals a new reality. Even at the smallest unit of our cognition, there is a kind of conversational cognition that is going on between individual neurons that modifies each other’s behavior. Thus, not only are neurons machines with state, they are also machines with an instruction set and a way to send code to each other. I’m sorry, but this is just another level of complexity.
There are two obvious ramifications of these experimental discoveries. The first is that our estimates of the computational capabilities of the human brain are likely to be at least an order of magnitude off. The second is that research will begin in earnest to explore DL architectures with more complex internal node (or neuron) structures.
If we were to make the rough argument that a single neuron performs a single operation, the total capacity of the human brain is measured at 38 peta operations per second. If we were then to assume a DL model of operations being equal to floating point operations, then a 38 petaflops system would be equivalent in capability. The top ranked supercomputer, Sunway TaihuLight from China, is estimated at 125 petaflops. However, let’s say the new results reveal 10x more computation; then the number should be 380 petaflops and we perhaps have breathing room until 2019. What is obvious, however, is that biological brains actually perform much more cognition with less computation.
The second consequence is that it’s now time to get back to the drawing board and begin to explore more complex kinds of neurons. The more complex kinds we’ve seen to date are the ones derived from LSTMs. Here is the result of a brute force architectural search for LSTM-like neurons:
It’s not clear why these more complex LSTMs are more effective. Only the architectural search algorithm knows, but it can’t explain itself.
There is newly released paper that explores more complex hand-engineered LSTMs:
that reveals measurable improvements over standard LSTMs:
In summary, a research plan that explores more complex kinds of neurons may bear promising fruit. This is not unlike the research that explores the use of complex values in neural networks. In these complex-valued networks, performance improvements are noticed only on RNN networks. This should indicate that these internal neuron complexities may be necessary for capabilities beyond simple perception. I suspect that these complexities are necessary for advanced cognition that seems to evade current Deep Learning systems. These include robustness to adversarial features, learning to forget, learning what to ignore, learning abstraction and recognizing contextual switching.
I predict in the near future that we shall see more aggressive research in this area. After all, nature is already unequivocally telling us that neurons are individually more complex and therefore our own neuron models may also need to be more complex. Perhaps we need something as complicated as a Grassmann Algebra to make progress. ;-)
|
Nityesh Agarwal | 2.4K | 13 | https://towardsdatascience.com/wth-does-a-neural-network-even-learn-a-newcomers-dilemma-bd8d1bbbed89?source=tag_archive---------9---------------- | “WTH does a neural network even learn??” — a newcomer’s dilemma | I believe, we all have that psychologist/philosopher in our brains that likes to ponder upon how thinking happens.
There.
A simple, clear bird’s eye view of what neural networks learn — they learn “increasingly more complex concepts”.
Doesn’t that feel familiar? Isn’t that how we learn anything at all?
For instance, let’s consider how we, as kids, probably learnt to recognise objects and animals —
See?
So, neural networks learn like we do!
It almost eases the mind to believe that we have this intangible sort of.. man-made “thing” that is analogous to the mind itself! It is especially appealing to someone who has just begun his/her Deep Learning journey.
But NO. A neural network’s learning is NOT ANALOGOUS to our own. Almost all the credible guides and ‘starters packs’ on the subject of deep learning come with a warning, something along the lines of:
..and that’s where all the confusion begins!
I think this was mostly because of the way in which most of the tutorials and beginner level books approach the subject.
Let’s see how Michael Nielsen describes what the hidden neurons are doing in his book — Neural Networks and Deep Learning:
He, like many others, uses the analogy between neural networks and the human mind to try to explain a neural networks. The way lines and edges make loops, which then help in recognising some digits is what we would think of doing. Many other tutorials try to use a similar analogy to explain what it means to build a hierarchy of knowledge.
I have to say that because of this analogy, I understand neural nets better.
But it is one of the paradoxes, that the very analogy that makes a difficult concept intelligible to the masses, can also create an illusion of knowledge among them.
Readers need to understand that it is just an analogy. Nothing more, nothing less. They need to understand that every simple analogy needs to be followed by more rigorous, seemingly difficult explanations.
Now don’t get me wrong. I am deeply thankful to Michael Nielsen for writing this book. It is one of the best books on the subject out there. He is careful in mentioning that this is “just for the sake of argument”.
But I took it to mean this — Maybe, the network won’t use the same exact pieces. Maybe, it will figure out some other pieces and join them in some other way to recognise the digits. But the essence will be the same. Right? I mean each of those pieces has to be some kind of an edge or a line or some loopy structure. After all, it doesn’t seem like there are other possibilities if you want to build a hierarchical structure to solve the problem of recognising digits.
As I gained a better intuition about them and how they work, I understood that this view is obviously wrong. It hit me..
Let’s consider loops —
Being able to identify a loop is essential for us humans to write digits- an 8 is two loops joined end-to-end, a 9 is loop with a tail under it and a 6 is loop with a tail up top. But when it comes to recognising digits in an image, features like loops seem difficult and infeasible for a neural network (Remember, I’m talking about your vanilla neutral networks or MLPs here).
I know it’s just a lot of “hand-wavy” reasoning, but I think it is enough to be convincing. Probably, the edges and all the other hand-engineered features would face similar problems.
..and there’s the dilemma!
I had no clue about the answer, or how to find it, until 3blue1brown released a set of videos about neural networks. It was Grant Sanderson’s take on explaining the subject to newcomers. Maybe he too felt that there were some missing pieces in other people’s explanations, and that he could address them in his tutorials.
And boy, did he!
Grant Sanderson of 3blue1brown, who uses a structure with 2 hidden layers, says —
The very loops and edges that we ruled out above.
They were not looking for loops or edges or anything even remotely close! They were looking for.. well something inexplicable.. some strange patterns that can be confused for random noise!
I found those weight matrix images (in the above screenshot) really fascinating. I thought of them as a Lego puzzle.
The weight matrix images were like the elementary Lego blocks and my task was to figure out a way to arrange them together so that I could create all 10 digits. This idea was inspired by the excerpt of Neural Networks and Deep Learning that I posted above. There we saw how we could assemble a 0 using hand-made features like edges and curves. So, I thought that, maybe, we could do the same with the features that the neural network actually found good.
All I needed was those weight matrix images that were used in 3blue1brown’s video. Now the problem was that Grant had put only 7 images in the video. So, I was gonna have to generate them on my own and create my very own set of Lego blocks!
I imported the code used in Michael Nielsen’s book into a Jupyter notebook. Then, I extended the Network class in there with methods that would help me visualise the weight matrices.
One pixel for every connection in the network. One image for each neuron, showing how much it ‘likes’ (colour: blue) or ‘dislikes’ (colour: red) each of the previous layer’s neurons.
So, if I was to look at the image belonging to one of the neurons in the hidden layer, it would be like a heat map showing one feature, one basic Lego block that will be used to recognise digits. Blue pixels would represent connections that it “likes” whereas red ones would represent the connections that it “dislikes”.
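Here is a minimal sketch of what such a visualisation helper could look like. It is not my exact code: it assumes the weight matrices live in a list called network.weights (as they do in the Network class from Nielsen’s repo), and the function name show_hidden_features is just a name I picked for illustration.

import numpy as np
import matplotlib.pyplot as plt

def show_hidden_features(network, cols=6):
    """Plot each hidden neuron's incoming weights as a 28x28 heat map.
    Assumes network.weights[0] has shape (n_hidden, 784), as in the
    Network class from Nielsen's book. Blue = positive weight ('likes'
    that pixel), red = negative weight ('dislikes' it)."""
    hidden_weights = network.weights[0]            # shape: (n_hidden, 784)
    n_hidden = hidden_weights.shape[0]
    rows = int(np.ceil(n_hidden / float(cols)))
    fig, axes = plt.subplots(rows, cols, figsize=(2 * cols, 2 * rows))
    for i, ax in enumerate(axes.flat):
        ax.axis("off")
        if i >= n_hidden:
            continue                               # leave unused subplots blank
        w = hidden_weights[i].reshape(28, 28)
        limit = np.abs(w).max()                    # symmetric colour scale around zero
        ax.imshow(w, cmap="RdBu", vmin=-limit, vmax=limit)
        ax.set_title(str(i), fontsize=8)
    plt.tight_layout()
    plt.show()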
I trained a neural network that had 784 input neurons (one per pixel of a 28x28 MNIST image), a single hidden layer of 30 neurons, and 10 output neurons (one per digit).
Notice that we will have 30 different types of basic Lego blocks for our Lego puzzle here because that’s the size of our hidden layer.
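For reference, this is roughly how such a network can be trained with the modules from the book’s repository (mnist_loader and network come from that repo; the hyperparameters below are the book’s usual ones, not necessarily the exact values I used):

import mnist_loader   # data-loading helper from Nielsen's repo
import network        # the Network class from Nielsen's repo

# Load MNIST: 50,000 training and 10,000 test images, each a 784-pixel vector
# (with the Python 3 ports you may need to wrap these in list() first)
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()

# 784 input pixels -> 30 hidden neurons (our 30 "Lego blocks") -> 10 outputs
net = network.Network([784, 30, 10])

# Stochastic gradient descent: 30 epochs, mini-batches of 10, learning rate 3.0
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)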
And.. here’s what they look like! —
These are the features that we were looking for! The ones that are better than loops and edges according to the network.
And here’s how it classifies all 10 digits:
And guess what? None of them make any sense!!
None of the features seem to capture any isolated distinguishable feature in the input image. All of them can be mistaken to be just randomly shaped blobs at randomly chosen places.
I mean, just look at how it identifies a ‘0':
This is the weight matrix image for the output neuron that recognizes ‘0's:
To be clear, the pixels in this image represent the weights connecting the hidden layer to the output neuron that recognises ‘0's.
We shall take only a handful of the most useful features for each digit into account. To do that, we can visually select the most intense blue pixels and the most intense red pixels. Here, the blue ones should give us the most useful features and the red ones should give us the most dreaded ones (think of it as the neuron saying — “The image will absolutely *not* match this prototype if it is a 0”).
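Picking them by eye works, but one could also rank them programmatically. A small sketch, assuming (as in Nielsen’s Network class) that net.weights[1] is the hidden-to-output weight matrix with one row per output neuron, so that net.weights[1][0] is the row feeding the ‘0’ output neuron:

import numpy as np

# Weights from the 30 hidden neurons into the output neuron for '0'
output_weights = net.weights[1][0]                 # shape: (30,)

# Hidden features this neuron "likes" most (largest positive weights)
top_blue = np.argsort(output_weights)[-3:][::-1]
# Hidden features it "dislikes" most (most negative weights)
top_red = np.argsort(output_weights)[:3]

print("Most liked hidden features:   ", top_blue)
print("Most disliked hidden features:", top_red)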
Indices of the three most intense blue pixels: 3, 6, 26
Indices of the three most intense red pixels: 5, 18, 22
Matrices 6 and 26 seem to capture something like a blue boundary of sorts that is surrounding inner red pixels — exactly what could actually help in identifying a ‘0’.
But what about matrix 3? It does not capture any feature we can even explain in words. The same goes for matrix 18. Why would the neuron not like it? It seems quite similar to matrix 3. And let’s not even go into the weird blue ‘S’ in 22.
Nonsensical, see!
Let’s do it for ‘1’:
Indices of the three most intense blue pixels: 0, 11, 16
Indices of the top two most intense red pixels: 7, 20
I have no words for this one! I won’t even try to comment.
In what world can THOSE be used to identify 1’s !?
Now, the much anticipated ‘8’ (how will it represent the 2 loops in it??):
Top 3 most intense blue pixels: 1, 6, 14
Top 3 most intense red pixels: 7, 24, 27
Nope, this isn’t any good either. There seem to be no loops like we were expecting it to have. But there is another interesting thing to notice in here — A majority of the pixels in the output layer neuron image (the one above the collage) are red. It seems like the network has figured out a way to recognise 8s using features that it does not like!
So, NO. I couldn’t put digits together using those features as Lego blocks. I failed real bad at the task.
But to be fair to myself, those features weren’t so much Lego-blocky either! Here’s why—
So, there it is. Neural networks can be said to learn like us if you consider the way they build hierarchies of features just like we do. But when you see the features themselves, they are nothing like what we would use. The networks give you almost no explanation for the features that they learn.
Neural networks are good function approximators. When we build and train one, we mostly just care about its accuracy: on what percentage of the test samples does it give the correct output?
This works incredibly well for a lot of purposes because modern neural nets can have remarkably high accuracies — upward of 98% is not uncommon (meaning they misclassify only a couple of images out of every hundred).
But here’s the catch — When they are wrong, there’s no easy way to understand the reason why they are. They can’t be “debugged” in the traditional sense. For example, here’s an embarrassing incident that happened with Google because of this:
Understanding what neural networks learn is a subject of great importance. It is crucial to unleashing the true power of deep learning. It will help us in
A few weeks ago The New York Times Magazine ran a story about how neural networks were trained to predict the death of cancer patients with a remarkable accuracy.
Here’s what the writer, an oncologist, said:
I think I can strongly relate to this because of my little project. :-)
During the little project that I described earlier, I stumbled upon a few other results that I found really cool and worth sharing. So here they are —
Smaller networks:
I wanted to see how low I could make the hidden layer size while still getting a considerable accuracy across my test set. It turns out that with 10 neurons, the network was able to classify 9343 out of 10000 test images correctly. That’s 93.43% accuracy at classifying images that it has never seen with just 10 hidden neurons.
Just 10 different types of Lego blocks to recognise 10 digits!!
I find this incredibly fascinating.
Of course, these weights don’t make much sense either!
In case you are curious, I tried it with 5 neurons too and got an accuracy of 86.65%; with 4 neurons, 83.73%; below that it dropped very steeply: 3 neurons gave 58.75% and 2 neurons just 22.80%.
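The sweep itself is easy to reproduce. Here is a rough sketch using the same modules as before (the exact numbers will of course vary with the random initialisation):

import mnist_loader
import network

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
# list() is harmless here and needed if the loader returns iterators
training_data, test_data = list(training_data), list(test_data)

for hidden_size in [30, 10, 5, 4, 3, 2]:
    net = network.Network([784, hidden_size, 10])
    net.SGD(training_data, 30, 10, 3.0)            # train without per-epoch evaluation
    correct = net.evaluate(test_data)              # number of correct test predictions
    print("{} hidden neurons: {} / {} correct".format(
        hidden_size, correct, len(test_data)))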
Weight initialisation + regularisation makes a LOT of difference:
Just regularising your network and using good initialisations for the weights can have a huge effect on what your network learns.
Let me demonstrate.
I used the same network architecture, meaning the same number of layers and the same number of neurons in each layer. I then trained two Network objects: one without regularisation and with the same old np.random.randn() initialisation, and another with regularisation and weights initialised as np.random.randn()/sqrt(n). This is what I observed:
Yeah! I was shocked too!
(Note: I have shown the weight matrices of neurons at different indices in the above collage. This is because, due to the different initialisations, even neurons at the same index learn different features. So, I chose the ones that make the effect most striking.)
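If you want to reproduce the comparison, network2.py from the book’s repository already supports both tweaks. Roughly (the hyperparameter values below are just illustrative, not necessarily the ones I used):

import mnist_loader
import network2

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
training_data, test_data = list(training_data), list(test_data)

# Network A: plain np.random.randn() weights, no regularisation
net_a = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net_a.large_weight_initializer()                   # weights ~ N(0, 1)
net_a.SGD(training_data, 30, 10, 0.5, lmbda=0.0,
          evaluation_data=test_data, monitor_evaluation_accuracy=True)

# Network B: randn()/sqrt(n) weights (network2's default initialiser) + L2 regularisation
net_b = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net_b.SGD(training_data, 30, 10, 0.5, lmbda=5.0,
          evaluation_data=test_data, monitor_evaluation_accuracy=True)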
To know more about weight initialisation techniques in neural networks I recommend that you start here.
If you want to discuss this article or any other project that you have in mind or really anything AI please feel free to comment below or drop me a message on LinkedIn, Facebook or Twitter. I have learnt a lot more about deep learning since I did the project in this article (like completing the Deep Learning Specialisation at Coursera!😄). Don’t hesitate to reach out if you think I could be of any help.
Thank you for reading! 😄 You can follow me on Twitter — https://twitter.com/nityeshaga; I won’t spam your feed. 😉
Originally published on the Zeolearn blog.
Reader, writer and a programmer.
Sharing concepts, ideas, and codes.
|