CHANNEL_NAME: Neural Networks: Zero to Hero
URL: https://www.youtube.com/watch?v=TCH_1BHY58I
TITLE: Building makemore Part 2: MLP

DESCRIPTION:
We implement a multilayer perceptron (MLP) character-level language model. In this video we also introduce many basics of machine learning (e.g. model training, learning rate tuning, hyperparameters, evaluation, train/dev/test splits, under/overfitting, etc.).
Links:
- makemore on github: https://github.com/karpathy/makemore
- jupyter notebook I built in this video: https://github.com/karpathy/nn-zero-to-hero/blob/master/lectures/makemore/makemore_part2_mlp.ipynb
- collab notebook (new)!!!: https://colab.research.google.com/drive/1YIfmkftLrz6MPTOO9Vwqrop2Q5llHIGK?usp=sharing
- Bengio et al. 2003 MLP language model paper (pdf): https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- (new) Neural Networks: Zero to Hero series Discord channel: https://discord.gg/Hp2m3kheJn , for people who'd like to chat more and go beyond youtube comments
Useful links:
- PyTorch internals ref http://blog.ezyang.com/2019/05/pytorch-internals/
Exercises:
- E01: Tune the hyperparameters of the training to beat my best validation loss of 2.2
- E02: I was not careful with the initialization of the network in this video. (1) What is the loss you'd get if the predicted probabilities at initialization were perfectly uniform? What loss do we achieve? (2) Can you tune the initialization to get a starting loss that is much more similar to (1)?
- E03: Read the Bengio et al 2003 paper (link above), implement and try any idea from the paper. Did it work?
Chapters:
00:00:00 intro
00:01:48 Bengio et al. 2003 (MLP language model) paper walkthrough
00:09:03 (re-)building our training dataset
00:12:19 implementing the embedding lookup table
00:18:35 implementing the hidden layer + internals of torch.Tensor: storage, views
00:29:15 implementing the output layer
00:29:53 implementing the negative log likelihood loss
00:32:17 summary of the full network
00:32:49 introducing F.cross_entropy and why
00:37:56 implementing the training loop, overfitting one batch
00:41:25 training on the full dataset, minibatches
00:45:40 finding a good initial learning rate
00:53:20 splitting up the dataset into train/val/test splits and why
01:00:49 experiment: larger hidden layer
01:05:27 visualizing the character embeddings
01:07:16 experiment: larger embedding size
01:11:46 summary of our final code, conclusion
01:13:24 sampling from the model
01:14:55 google collab (new!!) notebook advertisement

TRANSCRIPTION:
Hi everyone. Today we are continuing our implementation of makemore. Now in the last lecture we implemented the bigram language model, and we implemented it both using counts and also using a super simple neural network that has a single linear layer. Now this is the Jupyter notebook that we built out last lecture, and we saw that the way we approached this is that we looked at only the single previous character and we predicted the distribution for the character that would go next in the sequence, and we did that by taking counts and normalizing them into probabilities so that each row here sums to 1. Now this is all well and good if you only have one character of previous context, and this works and it's approachable. The problem with this model of course is that the predictions from this model are not very good, because you only take one character of context, so the model didn't produce very name-like sounding things. Now the problem with this approach though is that if we are to take more context into account when predicting the next character in a sequence, things quickly blow up: the size of this table grows, and in fact it grows exponentially with the length of the context, because if we only take a single character at a time that's 27 possibilities of context, but if we take two characters in the past and try to predict the third one, suddenly the number of rows in this matrix, if you look at it that way, is 27 times 27, so there are 729 possibilities for what could have come in the context. If we take three characters as the context, suddenly we have about 20 thousand possibilities of context, and so there are just way too many rows in this matrix, way too few counts for each possibility, and the whole thing just kind of explodes and doesn't work very well. So that's why today we're going to move on to this bullet point here, and we're going to implement a multilayer perceptron model to predict the next character in a sequence, and this modeling approach that we're going to adopt follows this paper, Bengio et al. 2003, so I have the paper pulled up here. Now this isn't the very first paper that proposed the use of multilayer perceptrons or neural networks to predict the next character or token in a sequence, but it's definitely one that was very influential around that time, it is very often cited to stand in for this idea, and I think it's a very nice write-up, and so this is the paper that we're going to first look at and then implement. Now this paper has 19 pages, so we don't have time to go into the full detail of it, but I invite you to read it; it's very readable, interesting, and has a lot of interesting ideas in it as well. In the introduction they describe the exact same problem I just described, and then to address it they propose the following model. Now keep in mind that we are building a character-level language model, so we're working on the level of characters. In this paper they have a vocabulary of 17,000 possible words and they instead build a word-level language model, but we're going to still stick with the characters, though we'll take the same modeling approach. Now what they do is basically they propose to take every one of these 17,000 words and associate to each word a, say, 30-dimensional feature vector. So every word is now embedded into a 30-dimensional space, you can think of it that way.
So we have 17,000 points or vectors in a 30-dimensional space, and you might imagine that's very crowded; that's a lot of points for a very small space. Now in the beginning these words are initialized completely randomly, so they're spread out at random, but then we're going to tune these embeddings of these words using backpropagation. So during the course of training of this neural network, these points or vectors are going to basically move around in this space, and you might imagine that, for example, words that have very similar meanings, or that are indeed synonyms of each other, might end up in a very similar part of the space, and conversely, words that mean very different things would go somewhere else in the space. Now their modeling approach otherwise is identical to ours. They are using a multilayer neural network to predict the next word given the previous words, and to train the neural network they are maximizing the log-likelihood of the training data, just like we did. So the modeling approach itself is identical. Now here they have a concrete example of this intuition. Why does it work? Basically, suppose that for example you are trying to predict "a dog was running in a" blank. Now suppose that the exact phrase "a dog was running in a" has never occurred in the training data, and here you are at test time later, when the model is deployed somewhere and it's trying to make a sentence, and it's saying "a dog was running in a" blank, and because it's never encountered this exact phrase in the training set, you're out of distribution, as we say. You don't fundamentally have any reason to suspect what might come next, but this approach actually allows you to get around that, because maybe you didn't see the exact phrase "a dog was running in a" something, but maybe you've seen similar phrases; maybe you've seen the phrase "the dog was running in a" blank, and maybe your network has learned that "a" and "the" are frequently interchangeable with each other, and so maybe it took the embedding for "a" and the embedding for "the" and it actually put them nearby each other in the space, and so you can transfer knowledge through that embedding and you can generalize in that way. Similarly, the network could know that cats and dogs are animals and they co-occur in lots of very similar contexts, and so even though you haven't seen this exact phrase, or even if you haven't seen exactly "walking" or "running", you can, through the embedding space, transfer knowledge and generalize to novel scenarios. So let's now scroll down to the diagram of the neural network; they have a nice diagram here, and in this example we are taking three previous words and we are trying to predict the fourth word in the sequence. Now these three previous words, as I mentioned: we have a vocabulary of 17,000 possible words, so every one of these is basically the index of the incoming word, and because there are 17,000 words this is an integer between 0 and 16,999. Now there's also a lookup table that they call C. This lookup table is a matrix that is 17,000 by, say, 30, and basically what we're doing here is we're treating this as a lookup table, and so every index is plucking out a row of this embedding matrix, so that each index is converted to the 30-dimensional vector that corresponds to the embedding vector for that word.
So here we have the input layer of 30 neurons for three words, making up 90 neurons in total, and here they're saying that this matrix C is shared across all the words, so we're always indexing into the same matrix C over and over, for each one of these words. Next up is the hidden layer of this neural network. The size of this hidden layer of this neural net is a hyperparameter. We use the word hyperparameter when it's kind of like a design choice up to the designer of the neural net, and this can be as large as you'd like or as small as you'd like; so for example the size could be a hundred, and we are going to go over multiple choices of the size of this hidden layer and we're going to evaluate how well they work. So say there were a hundred neurons here; all of them would be fully connected to the 90 numbers that make up these three words. So this is a fully connected layer, and then there's a tanh nonlinearity, and then there's this output layer, and because there are 17,000 possible words that could come next, this layer has 17,000 neurons, and all of them are fully connected to all of the neurons in the hidden layer. So there are a lot of parameters here, because there are a lot of words, so most computation is here; this is the expensive layer. Now there are 17,000 logits here, so on top of there we have the softmax layer, which we've seen in our previous video as well. So every one of these logits is exponentiated, and then everything is normalized to sum to one, so that we have a nice probability distribution for the next word in the sequence. Now of course during training we actually have the label; we have the identity of the next word in the sequence. That word, or its index, is used to pluck out the probability of that word, and then we are maximizing the probability of that word with respect to the parameters of this neural net. So the parameters are the weights and biases of this output layer, the weights and biases of the hidden layer, and the embedding lookup table C, and all of that is optimized using backpropagation. And these dashed arrows, ignore those; they represent a variation of a neural net that we are not going to explore in this video. So that's the setup, and now let's implement it. Okay, so I started a brand new notebook for this lecture. We are importing PyTorch and we are importing matplotlib so we can create figures. Then I am reading all the names into a list of words, like I did before, and I'm showing the first eight right here. Keep in mind that we have 32,000 in total; these are just the first eight. And then here I'm building out the vocabulary of characters and all the mappings from the characters as strings to integers and vice versa. Now the first thing we want to do is we want to compile the dataset for the neural network, and I had to rewrite this code; I'll show you in a second what it looks like. So this is the code that I created for the dataset creation, so let me first run it and then I'll briefly explain how this works. So first we're going to define something called block size, and this is basically the context length of how many characters we take to predict the next one. So here in this example we're taking three characters to predict the fourth one, so we have a block size of three. That's the size of the block that supports the prediction. Then here I'm building out the X and Y. The X are the inputs to the neural net and the Y are the labels for each example inside X. Then I'm iterating over the first five words.
I'm doing the first five just for efficiency while we are developing all the code, but then later we're going to come here and erase this so that we use the entire training set. So here I'm printing the word emma, and here I'm basically showing the examples that we can generate, the five examples that we can generate out of the single word emma. So when we are given the context of just dot dot dot, the label for the first character in the sequence is E; when the context is this, the label is M; and so forth. And so the way I build this out is: first I start with a padded context of just zero tokens, then I iterate over all the characters, I get the character in the sequence, and I basically build out the array Y of the current character and the array X which stores the current running context. And then here, see, I print everything, and here I crop the context and append the new character in the sequence. So this is kind of like a rolling window of context. Now we can change the block size here to, for example, four, and in that case we would be predicting the fifth character given the previous four; or it can be five and then it would look like this; or it can be, say, 10, and then it would look something like this: we're taking 10 characters to predict the 11th one, and we're always padding with dots. So let me bring this back to three, just so that we have what we have here in the paper. And finally, the dataset right now looks as follows. From these five words we have created a dataset of 32 examples, and each input to the neural net is three integers, and we have a label that is also an integer, Y. So X looks like this: these are the individual examples, and then Y are the labels. So given this, let's now write a neural network that takes these X's and predicts the Y's. First let's build the embedding lookup table C. So we have 27 possible characters and we're going to embed them in a lower-dimensional space. In the paper they have 17,000 words and they embed them in spaces as small-dimensional as 30; so they cram 17,000 words into a 30-dimensional space. In our case we have only 27 possible characters, so let's cram them into something as small as, to start with, a two-dimensional space. So this lookup table will be random numbers, and it will have 27 rows and two columns. Right, so each one of the 27 characters will have a two-dimensional embedding. So that's our matrix C of embeddings, in the beginning initialized randomly. Now before we embed all of the integers inside the input X using this lookup table C, let me actually just try to embed a single individual integer, like say 5, so we get a sense of how this works. Now one way this works, of course, is we can just take C and we can index into row 5, and that gives us a vector, the fifth row of C, and this is one way to do it. The other way that I presented in the previous lecture is seemingly different but actually identical. So in the previous lecture what we did is we took these integers and we used one-hot encoding to first encode them. So with F.one_hot we want to encode integer 5, and we want to tell it that the number of classes is 27. So that's the 27-dimensional vector of all zeros, except the fifth bit is turned on. Now this actually doesn't work. The reason is that this input actually must be a torch tensor. And I'm making some of these errors intentionally, just so you get to see some errors and how to fix them. So this must be a tensor, not an int; fairly straightforward to fix.
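As a reference while following along, here is a minimal sketch of the dataset-construction loop described above. The exact code is in the linked notebook; names like `words`, `stoi`, and `block_size` are assumed to exist as in the video (a list of names and a char-to-int mapping with `'.'` as token 0).

```python
import torch

block_size = 3  # context length: how many characters we take to predict the next one
X, Y = [], []
for w in words[:5]:                     # first five words only while developing; drop [:5] later
    context = [0] * block_size          # start with a padded context of '.' tokens
    for ch in w + '.':
        ix = stoi[ch]
        X.append(context)               # the current running context is the input
        Y.append(ix)                    # the next character is the label
        context = context[1:] + [ix]    # crop and append: a rolling window of context
X = torch.tensor(X)                     # shape (32, 3) for the first five words
Y = torch.tensor(Y)                     # shape (32,)
```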
We get a one-hot vector. The fifth dimension is one, and the shape of this is 27. And now notice that, just as I briefly alluded to in a previous video, if we take this one-hot vector and we multiply it by C, then what would you expect? Well, number one, first you'd expect an error: "expected scalar type Long but found Float". A little bit confusing, but the problem here is that the data type of the one-hot is long, a 64-bit integer, while this is a float tensor, and so PyTorch doesn't know how to multiply an int with a float; that's why we have to explicitly cast this to a float so that we can multiply. Now the output here is actually identical, and it's identical because of the way the matrix multiplication here works. We have the one-hot vector multiplying columns of C, and because of all the zeros, they actually end up masking out everything in C except for the fifth row, which is plucked out. And so we actually arrive at the same result, and that tells you that we can interpret this first piece, this embedding of the integer, in two ways: we can either think of it as the integer indexing into a lookup table C, but equivalently we can also think of this little piece here as the first layer of this bigger neural net. This layer here has neurons that have no nonlinearity, there's no tanh, they are just linear neurons, and their weight matrix is C. And then we are encoding integers into one-hot vectors and feeding those into a neural net, and this first layer basically embeds them. So those are two equivalent ways of doing the same thing. We're just going to index, because it's much, much faster, and we're going to discard this interpretation of one-hot inputs into neural nets; we're just going to index integers and use embedding tables. Now embedding a single integer like 5 is easy enough: we can simply ask PyTorch to retrieve the fifth row of C, or the row at index 5 of C. But how do we simultaneously embed all of these 32 by 3 integers stored in the array X? Luckily, PyTorch indexing is fairly flexible and quite powerful. So it doesn't just work to ask for a single element 5 like this; you can actually index using lists. So for example we can get the rows 5, 6, and 7, and this will just work; we can index with a list. It doesn't just have to be a list, it can also actually be a tensor of integers, and we can index with that. So this is an integer tensor 5, 6, 7 and this will just work as well. In fact, we can also, for example, repeat row 7 and retrieve it multiple times, and that same index will just get embedded multiple times here. So here we are indexing with a one-dimensional tensor of integers. But it turns out that you can also index with multi-dimensional tensors of integers. Here we have a two-dimensional tensor of integers, so we can simply just do C at X, and this just works. And the shape of this is 32 by 3, which is the original shape, and now for every one of those 32 by 3 integers we've retrieved the embedding vector here. So, as an example, take example index 13, second dimension: the integer there is 1. And so here, if we do C of X, which gives us that array, and then we index into 13, 2 of that array, then we get the embedding here. And you can verify that C at 1, the embedding of the integer at that location, is indeed equal to this. You see, they're equal.
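A small sketch of the equivalence being shown here (the seed and variable names are illustrative, not from the notebook):

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647)   # seed is illustrative
C = torch.randn((27, 2), generator=g)           # 27 characters, 2-dimensional embeddings

# indexing directly plucks out row 5 of C ...
e1 = C[5]
# ... and is equivalent to one-hot encoding followed by a matrix multiply
e2 = F.one_hot(torch.tensor(5), num_classes=27).float() @ C
print(torch.allclose(e1, e2))                   # True

# indexing also works with lists, integer tensors, and multi-dimensional tensors
print(C[[5, 6, 7]].shape)                       # torch.Size([3, 2])
print(C[torch.tensor([7, 7, 7])].shape)         # torch.Size([3, 2])
```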
So basically, long story short, PyTorch indexing is awesome, and to embed all of the integers in X simultaneously we can simply do C of X, and that is our embedding, and that just works. Now let's construct this layer here, the hidden layer. So we have W1, as I'll call it, which are these weights, which we will initialize randomly. Now the number of inputs to this layer is going to be three times two, because we have two-dimensional embeddings and we have three of them, so the number of inputs is six; and the number of neurons in this layer is a variable up to us. Let's use 100 neurons as an example, and then the biases will also be initialized randomly, and we just need 100 of them. Now the problem is this: normally we would take the input, in this case that's the embedding, and we'd like to multiply it with these weights and then add the bias. This is roughly what we want to do, but the problem here is that these embeddings are stacked up in the dimensions of this input tensor. So this matrix multiplication will not work, because this is of shape 32 by 3 by 2 and I can't multiply that by 6 by 100. So somehow we need to concatenate these inputs together so that we can do something along these lines, which currently does not work. So how do we transform this 32 by 3 by 2 into a 32 by 6 so that we can actually perform this multiplication over here? I'd like to show you that there are usually many ways of implementing what you'd like to do in torch, and some of them will be faster, better, shorter, etc., and that's because torch is a very large library and it's got lots and lots of functions. So if we just go to the documentation and click on torch, you'll see that my scrollbar here is very tiny, and that's because there are so many functions that you can call on these tensors to transform them, create them, multiply them, add them, perform all kinds of different operations on them. And so this is kind of like the space of possibility, if you will. Now one of the things that you can do is Ctrl+F for "concatenate", and we see that there's a function torch.cat, short for concatenate. And this concatenates a given sequence of tensors in a given dimension, and these tensors must have the same shape, etc. So we can use the concatenate operation to, in a naive way, concatenate these three embeddings for each input. So in this case we have emb, of this shape, and really what we want to do is retrieve these three parts and concatenate them. So we want to grab all the examples, and we want to grab first the zeroth index, and then all of this. So this plucks out the 32 by 2 embeddings of just the first word here. And so basically we want this one, the one at index one, and the one at index two. Those are the three pieces individually, and then we want to treat this as a sequence and we want to torch.cat on that sequence. So this is the list; torch.cat takes a sequence of tensors, and then we have to tell it along which dimension to concatenate. So in this case all of these are 32 by 2 and we want to concatenate not across dimension zero but across dimension one. So passing in one gives us a result whose shape is 32 by 6, exactly as we'd like. So that basically took these 32 by 2 pieces and concatenated them into 32 by 6. Now this is kind of ugly, because this code would not generalize if we later want to change the block size. Right now we have three inputs, three words.
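The naive concatenation being discussed, in sketch form (assuming C and X from the earlier sketches):

```python
emb = C[X]    # shape (32, 3, 2): one embedding vector per context character per example

# concatenate the three 32-by-2 slices along dimension 1 -> shape (32, 6)
cat = torch.cat([emb[:, 0, :], emb[:, 1, :], emb[:, 2, :]], dim=1)
print(cat.shape)   # torch.Size([32, 6])
```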
But what if we had five? Then here we would have to change the code, because I'm indexing directly. Well, torch comes to the rescue again, because there turns out to be a function called unbind, and it removes a tensor dimension. So it removes a tensor dimension and returns a tuple of all slices along the given dimension without it. So this is exactly what we need. And basically when we call torch.unbind of emb and pass in dimension one, this gives us a tuple of tensors exactly equivalent to this. So running this gives us a tuple of length three, and it's exactly this list, and so we can call torch.cat on it along the first dimension. And this works, and the shape is the same. But now it doesn't matter if we have block size three or five or ten; this will just work. So this is one way to do it, but it turns out that in this case there's actually a significantly better and more efficient way, and this gives me an opportunity to hint at some of the internals of torch.Tensor. So let's create an array here of elements from zero to 17, and the shape of this is just 18: it's a single vector of 18 numbers. It turns out that we can very quickly re-represent this as tensors of different sizes and dimensions. We do this by calling .view, and we can say that actually this is not a single vector of 18; this is a 2 by 9 tensor, or alternatively this is a 9 by 2 tensor, or this is actually a 3 by 3 by 2 tensor. As long as the total number of elements multiplies out to be the same, this will just work. And in PyTorch this operation, calling .view, is extremely efficient, and the reason for that is that in each tensor there's something called the underlying storage, and the storage is just the numbers, always as a one-dimensional vector; this is how the tensor is represented in the computer memory. It's always a one-dimensional vector. But when we call .view, we are manipulating some of the attributes of that tensor that dictate how this one-dimensional sequence is interpreted to be an n-dimensional tensor. And so what's happening here is that no memory is being changed, copied, moved, or created when we call .view. The storage is identical, but when you call .view, some of the internal attributes of the view of this tensor are being manipulated and changed; in particular, there's something called the storage offset, strides, and shapes, and those are manipulated so that this one-dimensional sequence of bytes is seen as different n-dimensional arrays. There's a blog post here from ezyang called PyTorch internals, where he goes into some of this with respect to tensors and how the view of a tensor is represented; this is really just a logical construct for representing the physical memory. And so this is a pretty good blog post that you can go into. I might also create an entire video on the internals of torch.Tensor and how this works. For here, we just note that this is an extremely efficient operation. And if I delete this and come back to our emb, we see that the shape of our emb is 32 by 3 by 2, but we can simply ask PyTorch to view this instead as a 32 by 6, and the way that it gets flattened into a 32 by 6 array just happens to be that these two get stacked up in a single row, and so that's basically the concatenation operation that we're after. And you can verify that this actually gives the exact same result as what we had before.
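A sketch of the two alternatives described here, unbind and view, assuming emb and cat from the previous sketch:

```python
# unbind generalizes the manual slicing to any block size
cat2 = torch.cat(torch.unbind(emb, dim=1), dim=1)      # also shape (32, 6)

# .view is the efficient way: it only changes shape/stride metadata, no copy of the storage
a = torch.arange(18)
print(a.view(2, 9).shape, a.view(9, 2).shape, a.view(3, 3, 2).shape)

print(torch.all(emb.view(32, 6) == cat2))              # tensor(True): same values, no new memory
```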
So this is an element-wise equals, and you can see that all the elements of these two tensors are the same, and so we get the exact same result. So, long story short, we can actually just come here, and if we just view this as a 32 by 6 instead, then this multiplication will work and give us the hidden states that we're after. So if this is h, then h.shape is now 32 by 100: 100-dimensional activations for every one of our 32 examples, and this gives the desired result. Let me do two things here. Number one, let's not hard-code 32. We can, for example, do something like emb.shape at zero, so that we don't hard-code these numbers and this would work for any size of this emb. Or alternatively we can also do negative one. When we do negative one, PyTorch will infer what this should be, because the number of elements must be the same, and we're saying that this is 6; PyTorch will derive that this must be 32, or whatever else it is if emb is of a different size. The other thing I'd like to point out is that here, when we do the concatenation, it is actually much less efficient, because this concatenation creates a whole new tensor with a whole new storage, so new memory is being created; there's no way to concatenate tensors just by manipulating the view attributes. So this is inefficient and creates all kinds of new memory. So let me delete this now; we don't need it. And here, to calculate h, we also want to take torch.tanh of this, to get our h. So these are now numbers between negative one and one, because of the tanh, and we have that the shape is 32 by 100, and that is basically this hidden layer of activations, for every one of our 32 examples. Now there's one more thing I glossed over that we have to be very careful with, and that's this plus here. In particular, we want to make sure that the broadcasting will do what we'd like. The shape of this is 32 by 100 and the bias's shape is 100. So we see that the addition here will broadcast these two, and in particular we have 32 by 100 broadcasting to 100. So broadcasting will align on the right and create a fake dimension here, so this will become a 1 by 100 row vector, and then it will copy vertically for every one of these 32 rows and do an element-wise addition. So in this case the correct thing will be happening, because the same bias vector will be added to all the rows of this matrix. So that is correct; that's what we'd like, and it's always good practice to just make sure, so that you don't shoot yourself in the foot. And finally, let's create the final layer here. So let's create W2 and b2. The input now is 100, and the output number of neurons will be, for us, 27, because we have 27 possible characters that come next, so the biases will be 27 as well. So therefore the logits, which are the outputs of this neural net, are going to be h multiplied by W2 plus b2. logits.shape is 32 by 27, and the logits look good. Now exactly as we saw in the previous video, we want to take these logits and first exponentiate them to get our fake counts, and then we want to normalize them into a probability. So prob is counts divided by counts.sum along the first dimension, with keepdim as True, exactly as in the previous video. And so prob.shape now is 32 by 27, and you'll see that every row of prob sums to one, so it's normalized. So that gives us the probabilities. Now of course we have the actual letter that comes next, and that comes from this array Y, which we created during the dataset creation.
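A sketch of the layers as built so far, with the shapes described above (continuing from the earlier sketches; g is the same illustrative generator):

```python
W1 = torch.randn((6, 100), generator=g)       # 3 context chars * 2 dims = 6 inputs, 100 neurons
b1 = torch.randn(100, generator=g)
h = torch.tanh(emb.view(-1, 6) @ W1 + b1)     # (32, 100); b1 broadcasts across the 32 rows

W2 = torch.randn((100, 27), generator=g)      # output layer: 27 possible next characters
b2 = torch.randn(27, generator=g)
logits = h @ W2 + b2                          # (32, 27)

counts = logits.exp()                         # "fake counts", as in the bigram model
prob = counts / counts.sum(1, keepdim=True)   # each row sums to 1
```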
So Y is this last piece here, which is the identity of the next character in the sequence that we'd like to now predict. So what we'd like to do now is, just as in the previous video, index into the rows of prob, and in each row pluck out the probability assigned to the correct character, as given here. So first we have torch.arange of 32, which is kind of like an iterator over numbers from 0 to 31, and then we can index into prob in the following way: prob at torch.arange of 32, which iterates over the rows, and then in each row we grab this column, as given by Y. So this gives the current probabilities, as assigned by this neural network with this setting of its weights, to the correct character in the sequence. And you can see here that this looks okay for some of these characters, like this is basically 0.2, but it doesn't look very good at all for many other characters; like this is a 0.0701 probability, and so the network thinks that some of these are extremely unlikely. But of course we haven't trained the neural network yet, so this will improve, and ideally all of these numbers here would of course be one, because then we would be correctly predicting the next character. Now just as in the previous video, we want to take these probabilities, look at the log probability, and then look at the average log probability and the negative of it, to create the negative log likelihood loss. So the loss here is 17, and this is the loss that we'd like to minimize to get the network to predict the correct character in the sequence. Okay, so I rewrote everything here and made it a bit more respectable. So here's our dataset, here are all the parameters that we defined; I'm now using a generator to make it reproducible. I clustered all the parameters into a single list of parameters so that, for example, it's easy to count them and see that in total we currently have about 3,400 parameters. And this is the forward pass, as we developed it, and we arrive at a single number here, the loss, which is currently expressing how well this neural network works with the current setting of parameters. Now I would like to make it even more respectable. So in particular, see these lines here where we take the logits and we calculate the loss: we're not actually reinventing the wheel here. This is just classification, and many people use classification, and that's why there is an F.cross_entropy function in PyTorch to calculate this much more efficiently. So we could just simply call F.cross_entropy, and we can pass in the logits and the array of targets, Y, and this calculates the exact same loss. So in fact we can simply put this here and erase these three lines, and we're going to get the exact same result. Now there are actually many good reasons to prefer F.cross_entropy over rolling your own implementation like this. I did this for educational reasons, but you'd never use this in practice. Why is that? Number one: when you use F.cross_entropy, PyTorch will not actually create all these intermediate tensors, because these are all new tensors in memory, and all of this is fairly inefficient to run like this. Instead, PyTorch will cluster up all these operations and very often create fused kernels that very efficiently evaluate these expressions, which are sort of like clustered mathematical operations.
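For reference, a sketch of the manual negative log likelihood and its F.cross_entropy equivalent (assuming prob, logits, and Y from the sketches above):

```python
import torch.nn.functional as F

# manual NLL: pluck out the probability assigned to the correct character in each row
loss_manual = -prob[torch.arange(32), Y].log().mean()

# the same loss, but more efficient and numerically safer
loss = F.cross_entropy(logits, Y)
print(loss_manual.item(), loss.item())   # the two agree
```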
Number two: the backward pass can be made much more efficient, and not just because it's a fused kernel, but also because analytically and mathematically it's often a much simpler backward pass to implement. We actually saw this with micrograd. You see, when we implemented tanh, the forward pass of that operation was a fairly complicated mathematical expression, but because it's a clustered mathematical expression, when we did the backward pass we didn't individually backward through the exp and the 2 times x and the minus 1 and the division, etc. We just said it's 1 minus t squared, and that's a much simpler mathematical expression, and we were able to do this because we're able to reuse calculations and because we are able to mathematically and analytically derive the derivative, and often that expression simplifies mathematically, and so there's much less to implement. So not only can it be made more efficient because it runs in a fused kernel, but also because the expression can take a much simpler form mathematically. So that's number one. Number two: under the hood, F.cross_entropy can also be significantly more numerically well behaved. Let me show you an example of how this works. Suppose we have logits of negative two, three, negative three, zero, and five, and then we are taking the exponent of them and normalizing them to sum to one. So when the logits take on these values, everything is well and good and we get a nice probability distribution. Now consider what happens when some of these logits take on more extreme values, which can happen during optimization of a neural network. Suppose that some of these numbers grow very negative, like say negative 100; then actually everything will come out fine. We still get probabilities that are, you know, well behaved, and they sum to one and everything is great. But because of the way the exp works, if you have very positive logits, like say positive 100 in here, you actually start to run into trouble, and we get a nan (not a number) here, and the reason for that is that these counts have an inf here. So if you pass in a very negative number to exp, you just get a very small number, very near zero, and that's fine. But if you pass in a very positive number, suddenly we run out of range in the floating point number that represents these counts. So basically we're taking e and raising it to the power of 100, and that gives us inf, because we run out of dynamic range on the floating point number that is counts, and so we cannot pass very large logits through this expression. Now let me reset these numbers to something reasonable. The way PyTorch solves this is as follows. You see how we have a really well-behaved result here? It turns out that, because of the normalization here, you can actually offset the logits by any arbitrary constant value that you want. So if I add one here, you actually get the exact same result, or if I add two, or if I subtract three; any offset will produce the exact same probabilities. So, because negative numbers are okay but positive numbers can actually overflow this exp, what PyTorch does is it internally calculates the maximum value that occurs in the logits and it subtracts it. So in this case it would subtract five, and so therefore the greatest number in the logits will become zero and all the other numbers will become some negative numbers, and then the result of this is always well behaved. So even if we have 100 here, which previously was not good, because PyTorch will subtract 100, this will work.
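A small sketch of the numerical issue and the offset trick described here (the helper softmax_naive is illustrative, not from the notebook):

```python
def softmax_naive(logits):
    counts = logits.exp()
    return counts / counts.sum()

l = torch.tensor([-2.0, 3.0, -3.0, 0.0, 5.0])
print(softmax_naive(l))
print(softmax_naive(l + 10))    # offsetting by any constant gives the exact same probabilities

big = torch.tensor([-2.0, 3.0, -3.0, 0.0, 100.0])
print(softmax_naive(big))       # exp(100) overflows float32 -> inf -> nan in the result
print(softmax_naive(big - big.max()))   # subtracting the max (what PyTorch does internally) fixes it
```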
And so there are many good reasons to call cross_entropy: the forward pass can be much more efficient, the backward pass can be much more efficient, and it can also be much more numerically well behaved. Okay, so let's now set up the training of this neural net. We have the forward pass. We don't need these lines, because we have that loss is equal to F.cross_entropy; that's the forward pass. Then we need the backward pass. First we want to set the gradients to be zero: so for p in parameters, we want to make sure that p.grad is None, which is the same as setting it to zero in PyTorch. And then loss.backward() to populate those gradients. Once we have the gradients we can do the parameter update: so for p in parameters, we want to take the p.data and nudge it by negative learning rate times p.grad. And then we want to repeat this a few times, and let's print the loss here as well. Now this won't suffice, and it will create an error, because we also have to go, for p in parameters, and make sure that p.requires_grad is set to True in PyTorch. And this should just work. Okay, so we started off with a loss of 17 and we're decreasing it. Let's run longer. And you see how the loss decreases a lot here. So if we just run for a thousand iterations, we get a very, very low loss, and that means we're making very good predictions. Now the reason this is so straightforward right now is because we're only overfitting 32 examples. We only have the 32 examples of the first five words, and therefore it's very easy to make this neural net fit only these 32 examples, because we have 3,400 parameters and only 32 examples. So we're doing what's called overfitting a single batch of the data and getting a very low loss and good predictions, but that's just because we have so many parameters for so few examples, so it's easy to make the loss very low. Now, we're not able to achieve exactly zero, and the reason for that is, we can for example look at the logits that are being predicted, and we can look at the max along the first dimension; in PyTorch, max reports both the actual values that take on the maximum, and also their indices. And you'll see that the indices are very close to the labels, but in some cases they differ. For example, in this very first example the predicted index is 19 but the label is 5, and we're not able to make the loss be zero. And fundamentally that's because here, the very first, or the zeroth, example is where dot dot dot is supposed to predict E; but you see how dot dot dot is also supposed to predict an O, and dot dot dot is also supposed to predict an I, and then an S as well. And so basically E, O, A, I, or S are all possible outcomes in the training set for the exact same input. So we're not able to completely overfit and make the loss be exactly zero. But we're getting very close in the cases where there's a unique input for a unique output; in those cases we do what's called overfit, and we basically get the exact correct result. So now all we have to do is make sure that we read in the full dataset and optimize the neural net. Okay, so let's swing back up to where we created the dataset, and we see that here we only used the first five words. So let me now erase this, and let me erase the print statements, otherwise we'd be printing way too much. And so when we process the full dataset of all the words, we now have 228,000 examples instead of just 32. So let's now scroll back down; the dataset is now much larger.
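A sketch of the training loop as described, overfitting the small batch of 32 examples (assuming C, W1, b1, W2, b2, X, Y from the earlier sketches; the learning rate 0.1 follows the transcript):

```python
import torch.nn.functional as F

parameters = [C, W1, b1, W2, b2]
for p in parameters:
    p.requires_grad = True

for _ in range(1000):
    # forward pass
    emb = C[X]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y)
    # backward pass
    for p in parameters:
        p.grad = None            # zero the gradients
    loss.backward()
    # update
    for p in parameters:
        p.data += -0.1 * p.grad
print(loss.item())
```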
We initialize the weights, the same number of parameters, they all require gradients, and then let's move this print of loss.item() to be here, and let's just see how the optimization goes if we run this. Okay, so we started with a fairly high loss, and then as we're optimizing, the loss is coming down. But you'll notice that it takes quite a bit of time for every single iteration. So let's actually address that, because we're doing way too much work forwarding and backwarding 228,000 examples. In practice, what people usually do is they perform the forward pass, backward pass, and update on mini-batches of the data. So what we want to do is randomly select some portion of the dataset, and that's a mini-batch, and then only forward, backward, and update on that little mini-batch, and then we iterate on those mini-batches. So in PyTorch we can, for example, use torch.randint. We can generate numbers between 0 and 5 and make 32 of them; I believe the size has to be a tuple in PyTorch, so we can have a tuple of 32 numbers between 0 and 5. But actually we want X.shape at 0 here. And so this creates integers that index into our dataset, and there are 32 of them. So if our mini-batch size is 32, then we can come here and first do the mini-batch construction. So the integers that we want to optimize over in this single iteration are in ix, and then we want to index into X with ix to only grab those rows. So we're only getting 32 rows of X, and therefore the embeddings will again be 32 by 3 by 2, not 228,000 by 3 by 2. And then this ix has to be used not just to index into X but also to index into Y. And now this runs on mini-batches and should be much, much faster. Okay, so it's almost instant. So this way we can process many, many examples nearly instantly and decrease the loss much, much faster. Now because we're only dealing with mini-batches, the quality of our gradient is lower, so the direction is not as reliable; it's not the actual gradient direction. But the gradient direction is good enough, even when it's estimated on only 32 examples, that it is useful. And so it's much better to have an approximate gradient and just take more steps than it is to evaluate the exact gradient and take fewer steps. So that's why in practice this works quite well. So let's now continue the optimization. Let me take out this loss.item() from here and place it over here at the end. Okay, so we're hovering around 2.5 or so. However, this is only the loss for that mini-batch, so let's actually evaluate the loss here for all of X and all of Y, just so we have a full sense of exactly how well the model is doing right now. So right now we're at about 2.7 on the entire training set. So let's run the optimization for a while. Okay, we're at 2.6, 2.57, 2.53. One issue, of course, is we don't know if we're stepping too slowly or too fast. This 0.1 I just guessed. So one question is, how do you determine this learning rate, and how do we gain confidence that we're stepping at the right sort of speed? So I'll show you one way to determine a reasonable learning rate. It works as follows. Let's reset our parameters to the initial settings, and now let's print at every step, but let's only do 10 steps or so, or maybe 100 steps. We want to find a reasonable search range, if you will. So for example, this is very low; then we see that the loss is barely decreasing, so that's too low, basically. So let's try this one. Okay, so we're decreasing the loss, but not very quickly.
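A sketch of the mini-batch version of the loop, as described (same assumed variables as the previous sketch):

```python
for _ in range(10000):
    # mini-batch construct: 32 random row indices into the data
    ix = torch.randint(0, X.shape[0], (32,))
    emb = C[X[ix]]                              # (32, 3, 2) instead of (228000, 3, 2)
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y[ix])       # index the labels with the same ix
    for p in parameters:
        p.grad = None
    loss.backward()
    for p in parameters:
        p.data += -0.1 * p.grad
```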
So that's a pretty good low end of the range. Now let's reset it again, and now let's try to find the place at which the loss kind of explodes. So maybe at negative one. Okay, we see that we're minimizing the loss, but you see how it's kind of unstable; it goes up and down quite a bit. So negative one is probably a fast learning rate. Let's try negative 10. Okay, so this isn't optimizing; this is not working very well. So negative 10 is way too big. Negative one was already kind of big. So therefore, negative one was somewhat reasonable, if I reset. So I'm thinking that the right learning rate is somewhere between negative 0.001 and negative 1. The way we can search over this is to use torch.linspace, and we want to basically do something like this, between 0.001 and 1, but the number of steps is one more parameter that's required; let's do a thousand steps. This creates 1000 numbers between 0.001 and 1. But it doesn't really make sense to step between these linearly, so instead let me create a learning rate exponent, and instead of 0.001 this will be negative three, and this will be zero, and then the actual learning rates that we want to search over are going to be 10 to the power of lre. So now what we're doing is stepping linearly between the exponents of these learning rates. This is 0.001 and this is 1, because 10 to the power of 0 is 1, and therefore we are spaced exponentially in this interval. So these are the candidate learning rates that we want to search over, roughly. So now what we're going to do is run the optimization for 1000 steps, and instead of using a fixed learning rate we are going to use the learning rate indexed into here, lrs at i, and make this i. So basically, let me reset this to again start from random, creating these learning rates between 0.001 and 1 but exponentially stepped, and here what we're doing is iterating a thousand times and using a learning rate that in the beginning is very, very low; in the beginning it's going to be 0.001, but by the end it's going to be 1, and then we're going to step with that learning rate. And now what we want to do is keep track of the learning rates that we used, and look at the losses that resulted. So here, let me track stats: lri.append the learning rate, and lossi.append loss.item(). Okay, so again reset everything and then run. And so basically we started with a very low learning rate and went all the way up to a learning rate of 1. And now what we can do is plt.plot the two: the learning rates on the x-axis and the losses we saw on the y-axis. And often you're going to find that your plot looks something like this, where in the beginning you have very low learning rates, so basically barely anything happened; then we got to a nice spot here; and then as we increased the learning rate enough, we basically started to be kind of unstable here. So a good learning rate turns out to be somewhere around here. And because we have lri here, we actually may want to track not the learning rate but its exponent; so lre at i is maybe what we want to log. So let me reset this and redo that calculation. But now, on the x-axis, we have the exponent of the learning rate, and so we can see the exponent of the learning rate that is good to use: it would be roughly in the valley here, because here the learning rates are just way too low.
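A sketch of the learning-rate search described here, assuming the same model variables as before (exponentially spaced candidate rates, tracking the exponent and the loss):

```python
import matplotlib.pyplot as plt

lre = torch.linspace(-3, 0, 1000)   # exponents from -3 to 0
lrs = 10 ** lre                      # candidate learning rates from 0.001 to 1, exponentially spaced

lri, lossi = [], []
for i in range(1000):
    # same forward/backward as the mini-batch loop above
    ix = torch.randint(0, X.shape[0], (32,))
    emb = C[X[ix]]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    lr = lrs[i]                      # the step size grows over the course of the search
    for p in parameters:
        p.data += -lr * p.grad
    # track stats
    lri.append(lre[i].item())        # log the exponent, not the raw learning rate
    lossi.append(loss.item())

plt.plot(lri, lossi)                 # look for the "valley" before the loss becomes unstable
```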
And then here we expect a relatively good learning rate somewhere here, and then here things are starting to explode. So somewhere around negative 1 as the exponent of the learning rate is a pretty good setting, and 10 to the negative 1 is 0.1. So 0.1 is actually a fairly good learning rate, around here, and that's what we had in the initial setting. But that's roughly how you would determine it. And so here now we can take out the tracking of these, and we can just simply set lr to be 10 to the negative 1, or in other words 0.1, as it was before. And now we have some confidence that this is actually a fairly good learning rate. And so now what we can do is crank up the iterations, reset our optimization, and run for a pretty long time using this learning rate. Oops, and we don't want to print; it's way too much printing. So let me again reset and run 10,000 steps. Okay, so we're at roughly 2.48. Let's run another 10,000 steps. 2.46. And now let's do one learning rate decay. What this means is we're going to take our learning rate and make it 10x lower, because at the late stages of training, potentially, we may want to go a bit slower. Let's do one more actually at 0.1, just to see if we're making a dent here. Okay, we're still making a dent. And by the way, the bigram loss that we achieved last video was 2.45, so we've already surpassed the bigram level. And once I get a sense that this is actually kind of starting to plateau off, people like to do, as I mentioned, this learning rate decay. So let's try to decay the loss, the learning rate I mean, and we achieve about 2.3 now. Obviously this is janky and not exactly how you'd train it in production, but this is roughly what you go through: you first find a decent learning rate using the approach that I showed you, then you start with that learning rate and you train for a while, and then at the end people like to do a learning rate decay, where you decay the learning rate by, say, a factor of 10, and you do a few more steps, and then you get a trained network, roughly speaking. So we've achieved 2.3, and dramatically improved on the bigram language model, using this simple neural net as described here, with these 3,400 parameters. Now there's something we have to be careful with. I said that we have a better model because we are achieving a lower loss, 2.3, much lower than the 2.45 with the bigram model previously. Now, that's not exactly true, and the reason is that this is actually a fairly small model, but these models can get larger and larger if you keep adding neurons and parameters. So you can imagine that we might not have just a few thousand parameters; we could have 10,000 or 100,000 or millions of parameters. And as the capacity of the neural network grows, it becomes more and more capable of overfitting your training set. What that means is that the loss on the training set, on the data that you're training on, will become very, very low, as low as zero, but all that the model is doing is memorizing your training set verbatim. So if you take that model, and it looks like it's working really well, but you try to sample from it, you will basically only get examples exactly as they are in the training set; you won't get any new data. In addition to that, if you try to evaluate the loss on some withheld names or other words, you will actually see that the loss on those can be very high, and so basically it's not a good model.
So the standard in the field is to split up your dataset into three splits, as we call them: the training split, the dev split or validation split, and the test split. So: training split, then dev or validation split, and test split. And typically this would be, say, 80% of your dataset, this could be 10%, and this 10%, roughly. So you have these three splits of the data. Now this 80% of the dataset, the training set, is used to optimize the parameters of the model, just like we're doing here, using gradient descent. These 10% of the examples, the dev or validation split, are used for development over all the hyperparameters of your model. So hyperparameters are, for example, the size of this hidden layer or the size of the embedding; that's a hundred, or a two, for us, but we could try different things; or the strength of the regularization, which we aren't using yet so far. There are lots of different hyperparameters and settings that go into defining a neural net, and you can try many different variations of them and see whichever one works best on your validation split. So this split is used to train the parameters, this one is used to train the hyperparameters, and the test split is used to evaluate, basically, the performance of the model at the end. So we're only evaluating the loss on the test split very, very sparingly and very few times, because every single time you evaluate your test loss and learn something from it, you are basically starting to also train on the test split. So you are only allowed to evaluate the loss on the test set very, very few times; otherwise you risk overfitting to it as well as you experiment on your model. So let's also split up our data into train, dev, and test, and then we are going to train on train and only evaluate on test very, very sparingly. Okay, so here we go. Here is where we took all the words and put them into X and Y tensors. So instead, let me create a new cell here and let me just copy-paste some code, because I don't think it's that complex, but we're going to try to save a little bit of time. I'm converting this to be a function now, and this function takes some list of words and builds the arrays X and Y for those words only. And then here I am shuffling up all the words; so these are the input words that we get, and we are randomly shuffling them all up. And then we're going to set n1 to be 80% of the number of words, and n2 to be 90% of the words. So basically, if the length of words is 32,000 (I should probably run this), n1 is about 25,000 and n2 is about 28,000. And so here we see that I'm calling build_dataset to build the training set, X and Y, by indexing up to n1, so we're going to have only 25,000 training words; and then we're going to have roughly n2 minus n1, about 3,000, validation or dev examples; and we're going to have len(words) minus n2, or 3,204, examples here for the test set. So now we have X's and Y's for all three splits. Oh yeah, I'm printing their sizes here inside the function as well. But here we don't have words; these are already the individual examples made from those words. So let's now scroll down here. The dataset for training is now much larger, and then when we reset the network, when we're training, we're only going to be training using X train and Y train. So that's the only thing we're training on. Let's see where we are on a single batch. Let's now train, maybe a few more steps. Training neural networks can take a while.
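A sketch of the split construction described here (variable names Xtr/Xdev/Xte are assumptions following the video's convention; stoi and block_size are as in the earlier dataset sketch, and the seed is illustrative):

```python
import random

def build_dataset(words):
    # builds the arrays X and Y for the given list of words only
    X, Y = [], []
    for w in words:
        context = [0] * block_size
        for ch in w + '.':
            ix = stoi[ch]
            X.append(context)
            Y.append(ix)
            context = context[1:] + [ix]
    return torch.tensor(X), torch.tensor(Y)

random.seed(42)                 # seed is illustrative
random.shuffle(words)
n1 = int(0.8 * len(words))
n2 = int(0.9 * len(words))
Xtr,  Ytr  = build_dataset(words[:n1])      # 80% training split
Xdev, Ydev = build_dataset(words[n1:n2])    # 10% dev / validation split
Xte,  Yte  = build_dataset(words[n2:])      # 10% test split
```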
Usually you don't do it inline; you launch a bunch of jobs and you wait for them to finish; it can take multiple days, and so on. Luckily, this is a very small network. Okay, so the loss is pretty good. Oh, we accidentally used a learning rate that is way too low. So let me come back; we used the decay learning rate of 0.01, so this will train faster. And then here, when we evaluate, let's use the dev set, Xdev and Ydev, to evaluate the loss. Okay, and let's not decay the learning rate and only do, say, 10,000 steps, and let's evaluate the dev loss once here. Okay, so we're getting about 2.3 on dev. And so the neural network, when it was training, did not see these dev examples; it hasn't optimized on them, and yet, when we evaluate the loss on the dev set, we actually get a pretty decent loss. And so we can also look at what the loss is on all of the training set. Oops. And we see that the training and the dev loss are about equal, so we're not overfitting. This model is not powerful enough to just be purely memorizing the data, and so far we are what's called underfitting, because the training loss and the dev or test losses are roughly equal. What that typically means is that our network is very tiny, very small, and we expect to make performance improvements by scaling up the size of this neural net. So let's do that now. Let's come over here and increase the size of the neural net. The easiest way to do this is to come here to the hidden layer, which currently has 100 neurons, and bump this up. So let's do 300 neurons; then this is also 300 biases, and here we have 300 inputs into the final layer. So let's initialize our neural net. We now have 10,000 parameters instead of 3,000 parameters. And then we're not using this, and then here what I'd like to do is actually keep track of stats. Okay, let's just do this: let's keep stats again, and here, when we're keeping track of the loss, let's also keep track of the steps, and let's just have i here. And let's train on 30,000, or rather, say, okay, let's try 30,000 iterations, and we are at a learning rate of 0.1. And then here, basically, I want to plt.plot the steps against the loss. So these are the x's and the y's, and this is the loss function and how it's being optimized. Now you see that there's quite a bit of thickness to this, and that's because we are optimizing over these mini-batches, and the mini-batches create a little bit of noise in this. Where are we on the dev set? We are at 2.5. So we still haven't optimized this neural net very well, and that's probably because we made it bigger; it might take longer for this neural net to converge. And so let's continue training. Yeah, let's just continue training. One possibility is that the batch size is so low that we just have way too much noise in the training, and we may want to increase the batch size so that we have a bit more correct gradient and we're not thrashing too much, and we can actually optimize more properly. Okay, this will now become meaningless, because we've re-initialized these. So yeah, this looks not pleasing right now. The problem is, look, there's a tiny improvement, but it's so hard to tell. Let's go again. 2.52. Let's try to decrease the learning rate by a factor of 2. Okay, we're at 2.32. Let's continue training. We basically expect to see a lower loss than what we had before, because now we have a much, much bigger model, and we were underfitting.
So we'd expect that increasing the size of the model should help the neural net. 2.32. Okay, so that's not happening too well. Now, one other concern is that even though we've made the tanh layer here, the hidden layer, much, much bigger, it could be that the bottleneck of the network right now is these embeddings that are two-dimensional. It could be that we're just cramming way too many characters into just two dimensions, and the neural net is not able to really use that space effectively, and that is sort of the bottleneck to our network's performance. Okay, 2.23. So just by decreasing the learning rate, I was able to make quite a bit of progress. Let's run this one more time and then evaluate the training and the dev loss. Now, one more thing after training that I'd like to do is visualize the embedding vectors for these characters before we scale up the embedding size from 2. We'd like to make this bottleneck potentially go away, but once I make the embedding size greater than two, we won't be able to visualize the embeddings. So here, okay, we're at 2.23 and 2.24, so we're not improving much more, and maybe the bottleneck now is the character embedding size, which is two. So here I have a bunch of code that will create a figure, and then we're going to visualize the embeddings that were trained by the neural net on these characters. Because right now the embedding size is just two, we can visualize all the characters with the x and the y coordinates as the two embedding locations for each of these characters. And so here are the x coordinates and the y coordinates, which are the two columns of C, and then for each one I also include the text of the little character. So here, what we see is actually kind of interesting. The network has basically learned to separate out the characters and cluster them a little bit. For example, you see how the vowels A, E, I, O, U are clustered up here. What that's telling us is that the neural net treats these as very similar, right? Because when they feed into the neural net, the embedding for all these characters is very similar, and so the neural net thinks that they're very similar and kind of interchangeable. And that makes sense. Then the points that are really far away are, for example, Q. Q is kind of treated as an exception, and Q has a very special embedding vector, so to speak. Similarly, dot, which is a special character, is all the way out here, and a lot of the other letters are sort of clustered up here. So it's kind of interesting that there's a little bit of structure here after the training, and it's definitely not random; these embeddings make sense. So we're now going to scale up the embedding size, and we won't be able to visualize it directly. We expect that, because we're underfitting and we made this layer much bigger yet did not sufficiently improve the loss, the constraint to better performance right now could be these embedding vectors. So let's make them bigger. Okay, so let's scroll up here. Now we don't have two-dimensional embeddings; we are going to have, say, 10-dimensional embeddings for each character. Then this layer will receive three times 10, so 30 inputs will go into the hidden layer. Let's also make the hidden layer a bit smaller: instead of 300, let's just do 200 neurons in that hidden layer. So now the total number of parameters will be slightly bigger, at about 11,000. 
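A small sketch of what such an embedding-visualization cell might look like, assuming the 27-by-2 embedding table C and the itos integer-to-character mapping from earlier (the figure styling here is just one reasonable choice, not necessarily the notebook's):

```python
import matplotlib.pyplot as plt

# scatter the 2-dimensional character embeddings and label each point with its character
plt.figure(figsize=(8, 8))
plt.scatter(C[:, 0].data, C[:, 1].data, s=200)
for i in range(C.shape[0]):
    # place the character's text at its embedding location
    plt.text(C[i, 0].item(), C[i, 1].item(), itos[i], ha="center", va="center", color="white")
plt.grid(True)
plt.show()
```

This only works while the embedding dimension is 2; once we scale it up, the structure can no longer be plotted directly like this.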
And then here we have to be a bit careful because, okay, the learning rate we set to 0.1, but here we are hard-coding a six, and obviously if you're working in production you don't want to be hard-coding magic numbers. But instead of six, this should now be 30. And let's run for 50,000 iterations, and let me split out the initialization here outside so that when we run this multiple times it's not going to wipe out our loss history. In addition to that, instead of logging loss.item(), let's actually log the log10 of the loss (I believe torch.log10 is the function), and I'll show you why in a second. Let's optimize this. Basically, I'd like to plot the log loss instead of the loss, because when you plot the loss it can often have this hockey-stick appearance, and log squashes it in, so it just kind of looks nicer. So the x-axis is stepi and the y-axis will be lossi. And then here, this is 30; ideally we wouldn't be hard-coding these. Let's look at the loss. Okay, it's again very thick, because the minibatch size is very small, but the total loss over the training set is 2.3 and the dev set is 2.3 as well. So far so good. Let's try to now decrease the learning rate by a factor of 10 and train for another 50,000 iterations. We'd hope that we would be able to beat 2.3, but again we're just kind of doing this very haphazardly, so I don't actually have confidence that our learning rate is set very well, or that our learning rate decay, which we just did at random, is set very well. So the optimization here is kind of suspect, to be honest, and this is not how you would typically do it in production. In production, you would create hyperparameters out of all these settings, and then you would run lots of experiments and see whichever ones are working well for you. Okay, so we have 2.17 now and 2.2. You see how the training and the validation performance are starting to slowly depart? So maybe we're getting the sense that the neural net is getting good enough, or that the number of parameters is large enough, that we are slowly starting to overfit. Let's maybe run one more iteration of this and see where we get. But yeah, basically you would be running lots of experiments and slowly scrutinizing whichever ones give you the best dev performance. And then once you find all the hyperparameters that make your dev performance good, you take that model and you evaluate the test set performance a single time, and that's the number that you report in your paper or wherever else you want to talk about and brag about your model. So let's then rerun the plot and rerun the train and dev loss. And because we're getting a lower loss now, it is very likely the case that the embedding size of 2 was holding us back. Okay, so 2.16 and 2.19 is roughly what we're getting. So there are many ways to go from here. We can continue tuning the optimization, we can continue, for example, playing with the size of the neural net, or we can increase the number of words, or characters in our case, that we are taking as an input. So instead of just three characters, we could be taking more characters as input, and that could further improve the loss. Okay, so I changed the code slightly so that we have here 200,000 steps of the optimization, and in the first 100,000 we're using a learning rate of 0.1, and then in the next 100,000 we're using a learning rate of 0.01. This is the loss that I achieve. 
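Putting the pieces just described together, a rough sketch of the scaled-up setup and training run might look like this: 10-dimensional embeddings, 200 hidden neurons, 200,000 steps with a step learning-rate decay, and log10 loss tracking. It assumes the Xtr/Ytr split from earlier and is a sketch under those assumptions, not the exact notebook code:

```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

g = torch.Generator().manual_seed(2147483647)
C  = torch.randn((27, 10), generator=g)        # 10-dimensional character embeddings now
W1 = torch.randn((30, 200), generator=g)       # 3 characters * 10 dims = 30 inputs, 200 hidden neurons
b1 = torch.randn(200, generator=g)
W2 = torch.randn((200, 27), generator=g)
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
print(sum(p.nelement() for p in parameters))   # total number of parameters, slightly bigger than before
for p in parameters:
    p.requires_grad = True

stepi, lossi = [], []
for i in range(200000):
    ix = torch.randint(0, Xtr.shape[0], (32,))     # minibatch of 32 examples
    emb = C[Xtr[ix]]                               # (32, 3, 10)
    h = torch.tanh(emb.view(-1, 30) @ W1 + b1)     # 30 here, not the previously hard-coded 6
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Ytr[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    lr = 0.1 if i < 100000 else 0.01               # step decay of the learning rate
    for p in parameters:
        p.data += -lr * p.grad
    stepi.append(i)
    lossi.append(loss.log10().item())              # log10 squashes the hockey-stick shape

plt.plot(stepi, lossi)
```

Keeping the initialization in a separate cell, as mentioned above, means you can rerun just the training loop without wiping out the accumulated stepi/lossi history.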
And these are the performance numbers on the training and validation loss. In particular, the best validation loss I've been able to obtain in the last 30 minutes or so is 2.17. So now I invite you to beat this number, and you have quite a few knobs available to you to, I think, surpass it. Number one, you can of course change the number of neurons in the hidden layer of this model. You can change the dimensionality of the embedding lookup table. You can change the number of characters that are feeding in as the context into this model. And then of course you can change the details of the optimization: how long are we running, what is the learning rate, how does it change over time, how does it decay. You can change the batch size, and you may be able to achieve a much better convergence speed in terms of how many seconds or minutes it takes to train the model and get a really good loss. And then of course, I actually invite you to read this paper. It is 19 pages, but at this point you should be able to read a good chunk of it and understand most of it, and this paper also has quite a few ideas for improvements that you can play with. So all of those knobs are now available to you, and you should be able to beat this number. I'm leaving that as an exercise to the reader, and that's it for now; I'll see you next time. Before we wrap up, I also wanted to show how you would sample from the model (a sketch of this loop appears at the end of this section). We're going to generate 20 samples. At first we begin with all dots, so that's the context. And then, until we generate the zeroth character (the dot) again, we're going to embed the current context using the embedding table C. Now, usually here the first dimension was the size of the training set, but here we're only working with a single example that we're generating, so this dimension is just one, for simplicity. This embedding then gets projected into the hidden state, and we get the logits. Then we calculate the probabilities; for that, you can use F.softmax on the logits, which basically exponentiates the logits and makes them sum to one, and, similar to cross_entropy, it is careful that there are no overflows. Once we have the probabilities, we sample from them using torch.multinomial to get our next index, and then we shift the context window to append the index and record it. And then we can just decode all the integers to strings and print them out. And so these are some example samples, and you can see that the model now works much better. The words here are much more word-like or name-like: we have things like ham, joes, lele; it's starting to sound a little bit more name-like. So we're definitely making progress, but we can still improve on this model quite a lot. Okay, sorry, there's some bonus content. I wanted to mention that I want to make these notebooks more accessible, and so I don't want you to have to install Jupyter notebooks and torch and everything else. So I will be sharing a link to a Google Colab, and the Google Colab will look like a notebook in your browser. You can just go to a URL and you'll be able to execute all of the code that you saw in this lecture. So this is me executing the code in this lecture, and I shortened it a little bit, but basically you're able to train the exact same network and then plot and sample from the model, and everything is ready for you to tinker with the numbers right there in your browser. No installation necessary. 
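To make the sampling procedure described above concrete, here is a hedged sketch of what the generation loop might look like. It assumes the trained parameters C, W1, b1, W2, b2, the block_size of 3, and the itos mapping from earlier; the random seed and formatting are arbitrary choices:

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647 + 10)   # arbitrary seed for sampling

for _ in range(20):                                   # generate 20 samples
    out = []
    context = [0] * block_size                        # start with all '.' tokens
    while True:
        emb = C[torch.tensor([context])]              # (1, block_size, d): a batch of one example
        h = torch.tanh(emb.view(1, -1) @ W1 + b1)     # hidden state
        logits = h @ W2 + b2
        probs = F.softmax(logits, dim=1)              # exponentiate + normalize, overflow-safe
        ix = torch.multinomial(probs, num_samples=1, generator=g).item()
        context = context[1:] + [ix]                  # shift the rolling context window
        out.append(ix)
        if ix == 0:                                   # index 0 is the '.' end token
            break
    print(''.join(itos[i] for i in out))
```

Each sample keeps feeding its own predictions back in as context until the end token is produced, which is exactly the rolling-window scheme used to build the training examples.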
So I just wanted to point that out and the link to this will be in the video description. | [{"start": 0.0, "end": 5.94, "text": " Hi everyone. Today we are continuing our implementation of Makemore. Now in the last"}, {"start": 5.94, "end": 9.32, "text": " lecture we implemented the bi-gram language model and we implemented it both"}, {"start": 9.32, "end": 13.76, "text": " using counts and also using a super simple neural network that has single"}, {"start": 13.76, "end": 20.04, "text": " linear layer. Now this is the Jupyter Notebook that we built out last lecture and"}, {"start": 20.04, "end": 23.76, "text": " we saw that the way we approached this is that we looked at only the single"}, {"start": 23.76, "end": 27.64, "text": " previous character and we predicted the distribution for the character that would"}, {"start": 27.64, "end": 31.92, "text": " go next in the sequence and we did that by taking counts and normalizing them"}, {"start": 31.92, "end": 38.24, "text": " into probabilities so that each row here sums to 1. Now this is all well and good"}, {"start": 38.24, "end": 42.760000000000005, "text": " if you only have one character of previous context and this works and it's"}, {"start": 42.760000000000005, "end": 47.84, "text": " approachable. The problem with this model of course is that the predictions from"}, {"start": 47.84, "end": 51.88, "text": " this model are not very good because you only take one character of context so"}, {"start": 51.88, "end": 57.56, "text": " the model didn't produce very name like sounding things. Now the problem with"}, {"start": 57.56, "end": 61.580000000000005, "text": " this approach though is that if we are to take more context into account when"}, {"start": 61.580000000000005, "end": 65.12, "text": " predicting the next character in a sequence things quickly blow up and this"}, {"start": 65.12, "end": 69.68, "text": " table the size of this table grows and in fact it grows exponentially with the"}, {"start": 69.68, "end": 73.52000000000001, "text": " length of the context because if we only take a single character at a time that's"}, {"start": 73.52000000000001, "end": 78.6, "text": " 27 possibilities of context but if we take two characters in the past and try to"}, {"start": 78.6, "end": 83.04, "text": " predict the third one suddenly the number of rows in this matrix you can look at it"}, {"start": 83.04, "end": 88.56, "text": " that way is 27 times 27 so there's 729 possibilities for what could have come in"}, {"start": 88.56, "end": 94.84, "text": " the context. If we take three characters as the context suddenly we have 20"}, {"start": 94.84, "end": 100.4, "text": " thousand possibilities of context and so there's just way too many rows of this"}, {"start": 100.4, "end": 105.84, "text": " matrix it's way too few counts for each possibility and the whole thing just"}, {"start": 105.84, "end": 110.32000000000001, "text": " kind of explodes and doesn't work very well. So that's why today we're going to"}, {"start": 110.32, "end": 113.88, "text": " move on to this bullet point here and we're going to implement a multi-layer"}, {"start": 113.88, "end": 119.75999999999999, "text": " perceptron model to predict the next character in a sequence and this modeling"}, {"start": 119.75999999999999, "end": 124.91999999999999, "text": " approach that we're going to adopt follows this paper Benjue et al. 2003 so I have"}, {"start": 124.91999999999999, "end": 129.0, "text": " the paper pulled up here. 
Now this isn't the very first paper that proposed the"}, {"start": 129.0, "end": 132.6, "text": " use of multi-layer perceptrons or neural networks to predict the next"}, {"start": 132.6, "end": 136.84, "text": " character or token in a sequence but it's definitely one that is was very"}, {"start": 136.84, "end": 140.28, "text": " influential around that time it is very often cited to stand in for this"}, {"start": 140.28, "end": 144.08, "text": " idea and I think it's a very nice write-up and so this is the paper that we're"}, {"start": 144.08, "end": 148.92000000000002, "text": " going to first look at and then implement. Now this paper has 19 pages so we don't"}, {"start": 148.92000000000002, "end": 152.68, "text": " have time to go into the full detail of this paper but I invite you to read it"}, {"start": 152.68, "end": 156.12, "text": " it's very readable interesting and has a lot of interesting ideas in it as"}, {"start": 156.12, "end": 159.68, "text": " well. In the introduction they described the exact same problem I just"}, {"start": 159.68, "end": 164.64, "text": " described and then to address it they proposed the following model. Now keep in"}, {"start": 164.64, "end": 168.72, "text": " mind that we are building a character level language model so we're working on"}, {"start": 168.72, "end": 173.52, "text": " the level of characters. In this paper we have a vocabulary of 17,000 possible"}, {"start": 173.52, "end": 177.64, "text": " words and they instead build a word level language model but we're going to"}, {"start": 177.64, "end": 181.64, "text": " still stick with the characters but we'll take the same modeling approach. Now"}, {"start": 181.64, "end": 186.24, "text": " what they do is basically they propose to take every one of these words 17,000"}, {"start": 186.24, "end": 191.52, "text": " words and they're going to associate to each word a say 30-dimensional feature"}, {"start": 191.52, "end": 198.28, "text": " vector. So every word is now embedded into a 30-dimensional space you can think"}, {"start": 198.28, "end": 203.64000000000001, "text": " of it that way. So we have 17,000 points or vectors in a 30-dimensional space and"}, {"start": 203.64000000000001, "end": 207.48, "text": " that's you might imagine that's very crowded that's a lot of points for a"}, {"start": 207.48, "end": 211.32, "text": " very small space. Now in the beginning these words are"}, {"start": 211.32, "end": 215.52, "text": " initialized completely randomly so there's pride out that random but then we're"}, {"start": 215.52, "end": 220.36, "text": " going to tune these embeddings of these words using that propagation. So during"}, {"start": 220.36, "end": 223.48, "text": " the course of training of this neural network these points or vectors are"}, {"start": 223.48, "end": 227.4, "text": " going to basically move around in this space and you might imagine that for example"}, {"start": 227.4, "end": 231.24, "text": " words that have very similar meanings or there are indeed synonyms of each"}, {"start": 231.24, "end": 235.16, "text": " other might end up in a very similar part of the space and conversely words"}, {"start": 235.16, "end": 239.96, "text": " that mean very different things would go somewhere else in the space. Now their"}, {"start": 239.96, "end": 244.0, "text": " modeling approach otherwise is identical to ours. 
They are using a multi-linear"}, {"start": 244.0, "end": 248.32, "text": " neural network to predict the next word given the previous words and to train"}, {"start": 248.32, "end": 251.12, "text": " the neural network they are maximizing the log-black limit of the training"}, {"start": 251.12, "end": 256.32, "text": " data just like we did. So the modeling approach itself is identical. Now here they"}, {"start": 256.32, "end": 261.48, "text": " have a concrete example of this intuition. Why does it work? Basically suppose that"}, {"start": 261.48, "end": 266.32, "text": " for example you are trying to predict a dog was running in a blank. Now suppose"}, {"start": 266.32, "end": 271.15999999999997, "text": " that the exact phrase a dog was running in a has never occurred in a training"}, {"start": 271.15999999999997, "end": 275.52, "text": " data and here you are at sort of test time later when the model is deployed"}, {"start": 275.52, "end": 280.08, "text": " somewhere and it's trying to make a sentence and it's saying dog was running in"}, {"start": 280.08, "end": 284.68, "text": " a blank and because it's never encountered this exact phrase in the training"}, {"start": 284.68, "end": 288.96, "text": " set you're out of distribution as we say. Like you don't have fundamentally any"}, {"start": 288.96, "end": 295.96, "text": " reason to suspect what might come next but this approach actually allows you to"}, {"start": 295.96, "end": 299.44, "text": " get around that because maybe you didn't see the exact phrase a dog was running"}, {"start": 299.44, "end": 303.24, "text": " in a something but maybe you've seen similar phrases maybe you've seen the"}, {"start": 303.24, "end": 307.88, "text": " phrase the dog was running in a blank and maybe your network has learned that a"}, {"start": 307.88, "end": 312.56, "text": " and the are like frequently are interchangeable with each other and so maybe it"}, {"start": 312.56, "end": 316.52, "text": " took the embedding for a and the embedding for the and it actually put them"}, {"start": 316.52, "end": 320.52, "text": " like nearby each other in the space and so you can transfer knowledge through"}, {"start": 320.52, "end": 324.68, "text": " that embedding and you can generalize in that way. Similarly the network could"}, {"start": 324.68, "end": 328.72, "text": " know that cats and dogs are animals and they co-occur in lots of very similar"}, {"start": 328.72, "end": 333.32, "text": " contexts and so even though you haven't seen this exact phrase or if you haven't"}, {"start": 333.32, "end": 338.12, "text": " seen exactly walking or running you can through the embedding space transfer"}, {"start": 338.12, "end": 343.28000000000003, "text": " knowledge and you can generalize to novel scenarios. So let's now scroll down to"}, {"start": 343.28000000000003, "end": 348.08, "text": " the diagram of the neural network they have a nice diagram here and in this"}, {"start": 348.08, "end": 352.88, "text": " example we are taking three previous words and we are trying to predict the"}, {"start": 352.88, "end": 359.2, "text": " fourth word in a sequence. Now these three previous words as I mentioned we have"}, {"start": 359.2, "end": 366.36, "text": " a vocabulary of 17,000 possible words so every one of these basically are the"}, {"start": 366.36, "end": 372.72, "text": " index of the incoming word and because there are 17,000 words this is an integer"}, {"start": 372.72, "end": 381.28000000000003, "text": " between 0 and 16,999. 
Now there's also a lookup table that they call C. This"}, {"start": 381.28000000000003, "end": 386.88, "text": " lookup table is a matrix that is 17,000 by say 30 and basically what we're"}, {"start": 386.88, "end": 391.44, "text": " doing here is we're treating this as a lookup table and so every index is"}, {"start": 391.44, "end": 397.12, "text": " plucking out a row of this embedding matrix so that each index is converted"}, {"start": 397.12, "end": 401.4, "text": " to the 30-dimensional vector that corresponds to the embedding vector for that"}, {"start": 401.4, "end": 408.32, "text": " word. So here we have the input layer of 30 neurons for three words making up"}, {"start": 408.32, "end": 413.28, "text": " 90 neurons in total and here they're saying that this matrix C is shared"}, {"start": 413.28, "end": 417.52, "text": " across all the words so we're always indexing it to the same matrix C over and"}, {"start": 417.52, "end": 423.88, "text": " over for each one of these words. Next up is the hidden layer of this neural"}, {"start": 423.88, "end": 428.32, "text": " network. The size of this hidden neural layer of this neural net is a hop"}, {"start": 428.32, "end": 431.68, "text": " parameter. So we use the word hyper parameter when it's kind of like a design"}, {"start": 431.68, "end": 435.59999999999997, "text": " choice up to the designer of the neural net and this can be as large as you'd"}, {"start": 435.59999999999997, "end": 439.76, "text": " like or as small as you'd like so for example the size could be a hundred and we"}, {"start": 439.76, "end": 443.64, "text": " are going to go over multiple choices of the size of this hidden layer and we're"}, {"start": 443.64, "end": 447.76, "text": " going to evaluate how well they work. So say there were a hundred neurons here"}, {"start": 447.76, "end": 454.24, "text": " all of them would be fully connected to the 90 words or 90 numbers that make up"}, {"start": 454.24, "end": 458.96, "text": " these three words. So this is a fully connected layer and there's a 10-inch"}, {"start": 458.96, "end": 463.8, "text": " long linearity and then there's this output layer and because our 17,000"}, {"start": 463.8, "end": 469.28, "text": " possible words that could come next this layer has 17,000 neurons and all of"}, {"start": 469.28, "end": 475.23999999999995, "text": " them are fully connected to all of these neurons in the hidden layer. So there's"}, {"start": 475.23999999999995, "end": 479.35999999999996, "text": " a lot of parameters here because there's a lot of words so most computation is"}, {"start": 479.35999999999996, "end": 485.23999999999995, "text": " here. This is the expensive layer. Now there are 17,000 logits here so on top of"}, {"start": 485.23999999999995, "end": 488.71999999999997, "text": " there we have the softmax layer which we've seen in our previous video as"}, {"start": 488.71999999999997, "end": 492.84, "text": " well. So every one of these logits is expedited and then everything is"}, {"start": 492.84, "end": 497.28, "text": " normalized to sum to one so that we have a nice probability distribution for"}, {"start": 497.28, "end": 502.03999999999996, "text": " the next word in the sequence. Now of course during training we actually have"}, {"start": 502.03999999999996, "end": 507.28, "text": " the label. We have the identity of the next word in the sequence. 
That word or"}, {"start": 507.28, "end": 513.4399999999999, "text": " its index is used to pluck out the probability of that word and then we are"}, {"start": 513.4399999999999, "end": 518.8399999999999, "text": " maximizing the probability of that word with respect to the parameters of this"}, {"start": 518.8399999999999, "end": 523.6, "text": " neural net. So the parameters are the weights and biases of this output layer,"}, {"start": 523.6, "end": 529.08, "text": " the weights and biases of this in the layer and the embedding lookup table C and"}, {"start": 529.08, "end": 534.28, "text": " all of that is optimized using backpropagation and these dashed arrows"}, {"start": 534.28, "end": 538.44, "text": " ignore those. That represents a variation of a neural net that we are not going"}, {"start": 538.44, "end": 543.0400000000001, "text": " to explore in this video. So that's the setup and now let's implement it. Okay so I"}, {"start": 543.0400000000001, "end": 547.8000000000001, "text": " started a brand new notebook for this lecture. We are importing by torch and we"}, {"start": 547.8000000000001, "end": 552.0, "text": " are importing matplotlibs so we can create figures. Then I am reading all the"}, {"start": 552.0, "end": 556.32, "text": " names into a list of words like I did before and I'm showing the first eight"}, {"start": 556.32, "end": 561.88, "text": " right here. Keep in mind that we have a 32,000 in total. These are just the first"}, {"start": 561.88, "end": 565.8, "text": " eight and then here I'm building out the vocabulary of characters and all the"}, {"start": 565.8, "end": 571.76, "text": " mappings from the characters as strings to integers and vice versa. Now the"}, {"start": 571.76, "end": 574.76, "text": " first thing we want to do is we want to compile the dataset for the neural"}, {"start": 574.76, "end": 579.12, "text": " network and I had to rewrite this code. I'll show you in a second what it looks"}, {"start": 579.12, "end": 584.8, "text": " like. So this is the code that I created for the dataset creation so let me first"}, {"start": 584.8, "end": 589.6, "text": " run it and then I'll briefly explain how this works. So first we're going to"}, {"start": 589.6, "end": 593.92, "text": " define something called block size and this is basically the context length of"}, {"start": 593.92, "end": 598.04, "text": " how many characters do we take to predict the next one. So here in this example"}, {"start": 598.04, "end": 601.96, "text": " we're taking three characters to predict the fourth one so we have a block size"}, {"start": 601.96, "end": 607.2, "text": " of three. That's the size of the block that supports the prediction. Then here"}, {"start": 607.2, "end": 613.08, "text": " I'm building out the x and y. The x are the input to the neural net and the y"}, {"start": 613.08, "end": 619.44, "text": " are the labels for each example inside x. Then I'm area over the first five"}, {"start": 619.44, "end": 623.32, "text": " words. I'm doing first five just four efficiency while we are developing all"}, {"start": 623.32, "end": 627.0, "text": " the code but then later we're going to come here and erase this so that we use"}, {"start": 627.0, "end": 632.76, "text": " the entire training set. So here I'm printing the word m up and here I'm"}, {"start": 632.76, "end": 636.8000000000001, "text": " basically showing the examples that we can generate the five examples that we"}, {"start": 636.8, "end": 643.04, "text": " can generate out of the single sort of word m up. 
So when we are given the"}, {"start": 643.04, "end": 648.12, "text": " context of just dot dot dot the first character in a sequence is E in this"}, {"start": 648.12, "end": 654.8399999999999, "text": " context the label SM when the context is this the label SM and so forth. And so"}, {"start": 654.8399999999999, "end": 658.0799999999999, "text": " the way I build this out is first I start with a padded context of just zero"}, {"start": 658.0799999999999, "end": 663.5999999999999, "text": " tokens. Then I iterate over all the characters I get the character in the"}, {"start": 663.6, "end": 668.88, "text": " sequence and I basically build out the array y of this current character and the"}, {"start": 668.88, "end": 673.16, "text": " array x which stores the current running context. And then here see I print"}, {"start": 673.16, "end": 678.08, "text": " everything and here I crop the context and enter the new character in a"}, {"start": 678.08, "end": 683.48, "text": " sequence. So this is kind of like a roll in the window of context. Now we can change"}, {"start": 683.48, "end": 687.36, "text": " the block size here to for example four. And in that case we would be predicting"}, {"start": 687.36, "end": 692.44, "text": " the fifth character given the previous four or it can be five and then it would"}, {"start": 692.44, "end": 698.0400000000001, "text": " look like this or it can be say 10 and then it would look something like this."}, {"start": 698.0400000000001, "end": 702.12, "text": " We're taking 10 characters to predict the 11th one and we're always padding"}, {"start": 702.12, "end": 707.9200000000001, "text": " with dots. So let me bring this back to three just so that we have what we have"}, {"start": 707.9200000000001, "end": 713.84, "text": " here in the paper. And finally the data set right now looks as follows. From"}, {"start": 713.84, "end": 719.2, "text": " these five words we have created a data set of 32 examples and each input"}, {"start": 719.2, "end": 723.0, "text": " is a neural net is three integers and we have a label that is also an integer"}, {"start": 723.0, "end": 730.32, "text": " y. So x looks like this. These are the individual examples and then y are the"}, {"start": 730.32, "end": 738.1600000000001, "text": " labels. So given this let's now write a neural network that takes these x's"}, {"start": 738.1600000000001, "end": 743.88, "text": " and predicts to y's. First let's build the embedding lookup table C. So we have"}, {"start": 743.88, "end": 747.48, "text": " 27 possible characters and we're going to embed them in a lower dimensional"}, {"start": 747.48, "end": 753.64, "text": " space. In the paper they have 17,000 words and they embed them in spaces as"}, {"start": 753.64, "end": 760.04, "text": " small dimensional as 30. So they cram 17,000 words into 30 dimensional space."}, {"start": 760.04, "end": 764.44, "text": " In our case we have only 27 possible characters. So let's cram them in"}, {"start": 764.44, "end": 769.04, "text": " something as small as to start with for example a two dimensional space. So this"}, {"start": 769.04, "end": 774.52, "text": " lookup table will be random numbers and we'll have 27 rows and we'll have two"}, {"start": 774.52, "end": 780.4, "text": " columns. Right so each 20 each one of 27 characters will have a two-dimensional"}, {"start": 780.4, "end": 786.0799999999999, "text": " embedding. So that's our matrix C of embeddings in the beginning"}, {"start": 786.0799999999999, "end": 791.0, "text": " initialized randomly. 
Now before we embed all of the integers inside the input"}, {"start": 791.0, "end": 796.4399999999999, "text": " x using this lookup table C let me actually just try to embed a single"}, {"start": 796.4399999999999, "end": 803.12, "text": " individual integer like say five. So we get a sense of how this works. Now one"}, {"start": 803.12, "end": 806.96, "text": " way this works of course is we can just take the C and we can index into row five"}, {"start": 806.96, "end": 815.08, "text": " and that gives us a vector the fifth row of C and this is one way to do it. The"}, {"start": 815.08, "end": 818.76, "text": " other way that I presented in the previous lecture is actually seemingly"}, {"start": 818.76, "end": 822.44, "text": " different but actually identical. So in the previous lecture what we did is we"}, {"start": 822.44, "end": 827.24, "text": " took these integers and we used the one-hot encoding to first encode them. So"}, {"start": 827.24, "end": 832.04, "text": " if that one hot we want to encode integer five and we want to tell it that"}, {"start": 832.04, "end": 836.4, "text": " their number of classes is 27. So that's the 26-dimensional vector of all"}, {"start": 836.4, "end": 843.48, "text": " zeros except the fifth bit is turned on. Now this actually doesn't work. The"}, {"start": 843.48, "end": 848.64, "text": " reason is that this input actually must be a two-shot tensor. And I'm making"}, {"start": 848.64, "end": 851.64, "text": " some of these errors intentionally just so you get to see some errors and how to"}, {"start": 851.64, "end": 856.52, "text": " fix them. So this must be a tensor not an int, fairly straightforward to fix. We"}, {"start": 856.52, "end": 861.12, "text": " get a one-hot vector. The fifth dimension is one and the shape of this is 27."}, {"start": 861.12, "end": 866.88, "text": " And now notice that just as I briefly alluded to in a previous video if we take"}, {"start": 866.88, "end": 876.64, "text": " this one-hot vector and we multiply it by C then what would you expect?"}, {"start": 876.64, "end": 884.72, "text": " Well number one first you'd expect an error because expected scalar type"}, {"start": 884.72, "end": 889.76, "text": " long but found float. So a little bit confusing but the problem here is that one"}, {"start": 889.76, "end": 897.04, "text": " hot the data type of it is long. It's a 64-bit integer but this is a float"}, {"start": 897.04, "end": 902.04, "text": " tensor. And so PyTorch doesn't know how to multiply an int with a float and that's"}, {"start": 902.04, "end": 907.2, "text": " why we had to explicitly cast this to a float so that we can multiply. Now the"}, {"start": 907.2, "end": 913.2, "text": " output actually here is identical and that it's identical because of the way the"}, {"start": 913.2, "end": 918.4399999999999, "text": " matrix multiplication here works. We have the one-hot vector multiplying columns"}, {"start": 918.44, "end": 923.8000000000001, "text": " of C and because of all the zeros they actually end up masking out everything in"}, {"start": 923.8000000000001, "end": 928.72, "text": " C except for the fifth row which is blocked out. And so we actually arrive at the"}, {"start": 928.72, "end": 934.12, "text": " same result and that tells you that here we can interpret this first piece here"}, {"start": 934.12, "end": 938.24, "text": " this embedding of the integer. 
We can either think of it as the integer indexing"}, {"start": 938.24, "end": 942.6800000000001, "text": " into a lookup table C but equivalently we can also think of this little piece"}, {"start": 942.68, "end": 948.68, "text": " here as a first layer of this bigger neural net. This layer here has neurons that"}, {"start": 948.68, "end": 952.7199999999999, "text": " have no nonlinearity there's no 10H there are just linear neurons and their"}, {"start": 952.7199999999999, "end": 958.9599999999999, "text": " wake matrix is C. And then we are encoding integers into one hot and feeding"}, {"start": 958.9599999999999, "end": 963.16, "text": " those into a neural net and this first layer basically embeds them. So those"}, {"start": 963.16, "end": 966.5999999999999, "text": " are two equivalent ways of doing the same thing. We're just going to index"}, {"start": 966.5999999999999, "end": 970.28, "text": " because it's much much faster and we're going to discard this interpretation of"}, {"start": 970.28, "end": 975.28, "text": " one-hot inputs into neural nets and we're just going to index integers and"}, {"start": 975.28, "end": 979.64, "text": " create and use embedding tables. Now embedding a single integer like five is"}, {"start": 979.64, "end": 985.16, "text": " easy enough. We can simply ask by torch to retrieve the fifth row of C or the"}, {"start": 985.16, "end": 991.28, "text": " row index five of C. But how do we simultaneously embed all of these 32 by"}, {"start": 991.28, "end": 997.04, "text": " three integers stored in array X? Wattly by torch indexing is fairly flexible and"}, {"start": 997.04, "end": 1003.8399999999999, "text": " quite powerful. So it doesn't just work to ask for a single element five like"}, {"start": 1003.8399999999999, "end": 1008.3199999999999, "text": " this. You can actually index using lists. So for example we can get the rows five"}, {"start": 1008.3199999999999, "end": 1014.0799999999999, "text": " six and seven and this will just work like this. We can index with a list. It"}, {"start": 1014.0799999999999, "end": 1017.9599999999999, "text": " doesn't just have to be a list it can also be a actually a tensor of integers."}, {"start": 1017.9599999999999, "end": 1023.8399999999999, "text": " And we can index with that. So this is a integer tensor five six seven and this"}, {"start": 1023.84, "end": 1029.4, "text": " will just work as well. In fact we can also for example repeat row seven and"}, {"start": 1029.4, "end": 1034.72, "text": " retrieve it multiple times and that same index will just get embedded multiple"}, {"start": 1034.72, "end": 1040.32, "text": " times here. So here we are indexing with a one-dimensional tensor of integers."}, {"start": 1040.32, "end": 1044.52, "text": " But it turns out that you can also index with multi-dimensional tensors of"}, {"start": 1044.52, "end": 1049.2, "text": " integers. Here we have a two-dimensional tensor of integers. So we can"}, {"start": 1049.2, "end": 1058.88, "text": " simply just do C at X and this just works. And the shape of this is 32 by 3 which"}, {"start": 1058.88, "end": 1061.8400000000001, "text": " is the original shape. And now for every one of those three two by three"}, {"start": 1061.8400000000001, "end": 1067.8, "text": " integers we've retrieved the embedding vector here. So basically we have that"}, {"start": 1067.8, "end": 1076.56, "text": " as an example the 13th or example index 13 the second dimension is the integer"}, {"start": 1076.56, "end": 1083.48, "text": " one as an example. 
And so here if we do C of X which gives us that array and"}, {"start": 1083.48, "end": 1090.9199999999998, "text": " then we index into 13 by 2 of that array then we get the embedding here. And you"}, {"start": 1090.9199999999998, "end": 1098.1599999999999, "text": " can verify that C at one which is the integer at that location is indeed equal"}, {"start": 1098.1599999999999, "end": 1103.6399999999999, "text": " to this. You see they're equal. So basically a long story short PyTorch"}, {"start": 1103.64, "end": 1109.8400000000001, "text": " indexing is awesome and to embed simultaneously all of the integers in X we"}, {"start": 1109.8400000000001, "end": 1115.5600000000002, "text": " can simply do C of X and that is our embedding and that just works. Now let's"}, {"start": 1115.5600000000002, "end": 1121.76, "text": " construct this layer here the hidden layer. So we have that W1 as I'll call it"}, {"start": 1121.76, "end": 1127.3600000000001, "text": " are these weights which we will initialize randomly. Now the number of inputs"}, {"start": 1127.3600000000001, "end": 1131.68, "text": " to this layer is going to be three times two right because we have two"}, {"start": 1131.68, "end": 1135.44, "text": " dimensional embeddings and we have three of them. So the number of inputs is six"}, {"start": 1135.44, "end": 1141.28, "text": " and the number of neurons in this layer is a variable up to us. Let's use 100"}, {"start": 1141.28, "end": 1146.72, "text": " neurons as an example and then biases will be also initialized randomly as an"}, {"start": 1146.72, "end": 1153.3600000000001, "text": " example and let's and we just need 100 of them. Now the problem with this is we"}, {"start": 1153.3600000000001, "end": 1157.44, "text": " can't simply normally we would take the input in this case that's embedding and"}, {"start": 1157.44, "end": 1161.96, "text": " we'd like to multiply it with these weights and then we would like to add the"}, {"start": 1161.96, "end": 1166.0, "text": " bias. This is roughly what we want to do but the problem here is that these"}, {"start": 1166.0, "end": 1170.64, "text": " embeddings are stacked up in the dimensions of this impotenture. So this will"}, {"start": 1170.64, "end": 1174.72, "text": " not work this matrix multiplication because this is a shape 32 by 3 by 2 and I"}, {"start": 1174.72, "end": 1179.6000000000001, "text": " can't multiply that by 6 by 100. So somehow we need to concatenate these"}, {"start": 1179.6000000000001, "end": 1183.28, "text": " inputs here together so that we can do something along these lines which"}, {"start": 1183.28, "end": 1189.08, "text": " currently does not work. So how do we transform this 32 by 3 by 2 into a 32 by"}, {"start": 1189.08, "end": 1194.76, "text": " 6 so that we can actually perform this multiplication over here. I'd like to"}, {"start": 1194.76, "end": 1199.36, "text": " show you that there are usually many ways of implementing what you'd like to"}, {"start": 1199.36, "end": 1204.28, "text": " do in Torch and some of them will be faster, better, shorter, etc. 
And that's"}, {"start": 1204.28, "end": 1208.48, "text": " because Torch is a very large library and it's got lots and lots of functions."}, {"start": 1208.48, "end": 1212.44, "text": " So if we just go to the documentation and click on Torch you'll see that my"}, {"start": 1212.44, "end": 1216.04, "text": " slider here is very tiny and that's because there are so many functions that"}, {"start": 1216.04, "end": 1220.4, "text": " you can call on these tensors to transform them, create them, multiply them,"}, {"start": 1220.4, "end": 1226.1200000000001, "text": " add them, perform all kinds of different operations on them. And so this is"}, {"start": 1226.1200000000001, "end": 1232.16, "text": " kind of like the space of possibility if you will. Now one of the things that you"}, {"start": 1232.16, "end": 1236.44, "text": " can do is we can control here, control off for concatenate and we see that"}, {"start": 1236.44, "end": 1241.6000000000001, "text": " there's a function torqued.cat, short for concatenate. And this concatenate is"}, {"start": 1241.6, "end": 1246.6799999999998, "text": " given sequence of tensors in a given dimension and these tensors must have the"}, {"start": 1246.6799999999998, "end": 1251.28, "text": " same shape, etc. So we can use the concatenate operation to in a naive way"}, {"start": 1251.28, "end": 1257.04, "text": " concatenate these three embeddings for each input. So in this case we have"}, {"start": 1257.04, "end": 1262.84, "text": " m of m of the shape. And really what we want to do is we want to retrieve these"}, {"start": 1262.84, "end": 1269.12, "text": " three parts and concatenate them. So we want to grab all the examples. We want to"}, {"start": 1269.12, "end": 1281.52, "text": " grab first the zero index and then all of this. So this plugs out the 32 by"}, {"start": 1281.52, "end": 1288.9199999999998, "text": " two embeddings of just the first word here. And so basically we want this guy. We"}, {"start": 1288.9199999999998, "end": 1293.3999999999999, "text": " want the first dimension and we want the second dimension. And these are the"}, {"start": 1293.3999999999999, "end": 1298.9599999999998, "text": " three pieces individually. And then we want to treat this as a sequence and we"}, {"start": 1298.96, "end": 1305.1200000000001, "text": " want to torqued.cat on that sequence. So this is the list torqued.cat takes a"}, {"start": 1305.1200000000001, "end": 1310.3600000000001, "text": " sequence of tensors. And then we have to tell it along which dimension to concatenate."}, {"start": 1310.3600000000001, "end": 1315.4, "text": " So in this case all these are 32 by two and we want to concatenate not across"}, {"start": 1315.4, "end": 1322.1200000000001, "text": " dimension zero but across dimension one. So passing in one gives us a result that"}, {"start": 1322.1200000000001, "end": 1326.88, "text": " the shape of this is 32 by six exactly as we'd like. So that basically took 32"}, {"start": 1326.88, "end": 1332.3600000000001, "text": " and squashed these back and concatenate them into 32 by six. Now this is kind"}, {"start": 1332.3600000000001, "end": 1336.48, "text": " of ugly because this code would not generalize if we want to later change the"}, {"start": 1336.48, "end": 1341.5600000000002, "text": " block size. Right now we have three inputs three words. But what if we had five"}, {"start": 1341.5600000000002, "end": 1346.2, "text": " then here we would have to change the code because I'm indexing directly. 
Well"}, {"start": 1346.2, "end": 1350.0800000000002, "text": " torqued comes to rescue again because that turns out to be a function called"}, {"start": 1350.08, "end": 1357.36, "text": " unbind and it removes a tensor dimension. So removes a tensor dimension returns a"}, {"start": 1357.36, "end": 1362.9199999999998, "text": " tuple of all slices along the given dimension without it. So this is exactly what"}, {"start": 1362.9199999999998, "end": 1372.48, "text": " we need. And basically when we call tors.unbind tors.unbind of m and passing"}, {"start": 1372.48, "end": 1381.28, "text": " dimension one index one. This gives us a list of a list of tensors exactly"}, {"start": 1381.28, "end": 1388.6, "text": " equivalent to this. So running this gives us a line three and it's exactly this"}, {"start": 1388.6, "end": 1394.2, "text": " list. And so we can call torched out cat on it and along the first dimension."}, {"start": 1394.2, "end": 1401.28, "text": " And this works and this shape is the same. But now this is it doesn't matter if"}, {"start": 1401.28, "end": 1405.84, "text": " we have block size three or five or ten this will just work. So this is one way"}, {"start": 1405.84, "end": 1409.76, "text": " to do it. But it turns out that in this case there's actually a significantly"}, {"start": 1409.76, "end": 1413.68, "text": " better and more efficient way. And this gives me an opportunity to hint at some"}, {"start": 1413.68, "end": 1420.92, "text": " of the internals of torched out tensor. So let's create an array here of elements"}, {"start": 1420.92, "end": 1426.6, "text": " from zero to 17. And the shape of this is just 18. It's a single picture of 18"}, {"start": 1426.6, "end": 1432.4399999999998, "text": " numbers. It turns out that we can very quickly we represent this as different"}, {"start": 1432.4399999999998, "end": 1438.3999999999999, "text": " sized and dimensional tensors. We do this by calling a view. And we can say that"}, {"start": 1438.3999999999999, "end": 1444.48, "text": " actually this is not a single vector of 18. This is a two by nine tensor. Or"}, {"start": 1444.48, "end": 1450.04, "text": " alternatively this is a nine by two tensor. Or this is actually a three by three"}, {"start": 1450.04, "end": 1455.52, "text": " by two tensor. As long as the total number of elements here multiply to be the"}, {"start": 1455.52, "end": 1462.36, "text": " same this will just work. And in PyTorch this operation calling that view is"}, {"start": 1462.36, "end": 1467.36, "text": " extremely efficient. And the reason for that is that in each tensor there's"}, {"start": 1467.36, "end": 1472.68, "text": " something called the underlying storage. And the storage is just the numbers"}, {"start": 1472.68, "end": 1477.28, "text": " always as a one dimensional vector. And this is how this tensor has represented"}, {"start": 1477.28, "end": 1482.8, "text": " in the computer memory. It's always a one dimensional vector. But when we call"}, {"start": 1482.8, "end": 1488.28, "text": " that view we are manipulating some of attributes of that tensor that dictate"}, {"start": 1488.28, "end": 1492.48, "text": " how this one dimensional sequence is interpreted to be an end-dimensional"}, {"start": 1492.48, "end": 1497.0, "text": " tensor. And so what's happening here is that no memory is being changed, copied,"}, {"start": 1497.0, "end": 1502.24, "text": " moved, or created when we call that view. The storage is identical. 
But when you"}, {"start": 1502.24, "end": 1508.04, "text": " call that view some of the internal attributes of the view of this tensor are"}, {"start": 1508.04, "end": 1511.0, "text": " being manipulated and changed. In particular that's something there's something"}, {"start": 1511.0, "end": 1515.52, "text": " called storage offset, strides, and shapes. And those are manipulated so that"}, {"start": 1515.52, "end": 1519.2, "text": " this one dimensional sequence of bytes is seen as different and dimensional"}, {"start": 1519.2, "end": 1525.28, "text": " arrays. There's a blog post here from Eric called PyTorch internals where he"}, {"start": 1525.28, "end": 1529.24, "text": " goes into some of this with respect to tensor and how the view of a tensor is"}, {"start": 1529.24, "end": 1534.08, "text": " represented. And this is really just like a logical construct of representing"}, {"start": 1534.08, "end": 1539.28, "text": " the physical memory. And so this is a pretty good blog post that you can go into."}, {"start": 1539.28, "end": 1542.92, "text": " I might also create an entire video on the internals of Torch tensor and how"}, {"start": 1542.92, "end": 1547.0, "text": " this works. For here we just note that this is an extremely efficient"}, {"start": 1547.0, "end": 1554.28, "text": " operation. And if I delete this and come back to our end we see that the shape of"}, {"start": 1554.28, "end": 1559.8, "text": " our end is 3 2 by 3 by 2. But we can simply ask for PyTorch to view this"}, {"start": 1559.8, "end": 1566.84, "text": " instead as a 3 2 by 6. And the way that gets flattened into a 3 2 by 6 array"}, {"start": 1566.84, "end": 1574.28, "text": " just happens that these two get stacked up in a single row. And so that's"}, {"start": 1574.28, "end": 1578.24, "text": " basically the concatenation operation that we're after. And you can verify that"}, {"start": 1578.24, "end": 1582.9199999999998, "text": " this actually gives the exact same result as what we had before. So this is an"}, {"start": 1582.9199999999998, "end": 1586.24, "text": " element y equals and you can see that all the elements of these two tensors are"}, {"start": 1586.24, "end": 1592.36, "text": " the same. And so we get the exact same result. So long story short we can"}, {"start": 1592.36, "end": 1600.0, "text": " actually just come here. And if we just view this as a 3 2 by 6 instead then"}, {"start": 1600.0, "end": 1604.6, "text": " this multiplication will work and give us the hidden states that were after. So"}, {"start": 1604.6, "end": 1611.1999999999998, "text": " if this is h then h dot shape is now the 100 dimensional activations for"}, {"start": 1611.1999999999998, "end": 1616.08, "text": " every one of our 32 examples. And this gives the desired result. Let me do two"}, {"start": 1616.08, "end": 1620.84, "text": " things here. Number one let's not use 32. We can for example do something like"}, {"start": 1620.84, "end": 1628.04, "text": " m dot shape at zero so that we don't hard code these numbers and this would"}, {"start": 1628.04, "end": 1632.9599999999998, "text": " work for any size of this m or alternatively we can also do negative one. When we"}, {"start": 1632.9599999999998, "end": 1637.52, "text": " do negative one, PytroTroll and Fur what this should be. Because the number of"}, {"start": 1637.52, "end": 1641.28, "text": " elements must be the same and we're saying that this is 6. 
PytroTroll derived"}, {"start": 1641.28, "end": 1647.28, "text": " that this must be 32 or whatever else it is if m is of different size. The other"}, {"start": 1647.28, "end": 1653.84, "text": " thing is here one more thing I'd like to point out is here when we do the"}, {"start": 1653.84, "end": 1659.76, "text": " concatenation this actually is much less efficient because this concatenation"}, {"start": 1659.76, "end": 1663.36, "text": " would create a whole new tensor with a whole new storage so new memory is being"}, {"start": 1663.36, "end": 1667.32, "text": " created because there's no way to concatenate tensors just by manipulating the"}, {"start": 1667.32, "end": 1672.8799999999999, "text": " view attributes. So this is inefficient and creates all kinds of new memory. So"}, {"start": 1672.88, "end": 1679.4, "text": " let me repeat this now. We don't need this and here to calculate H we want to"}, {"start": 1679.4, "end": 1688.0800000000002, "text": " also dot 10 H of this ticket our. Oops to get our H. So these are now numbers"}, {"start": 1688.0800000000002, "end": 1692.5600000000002, "text": " between negative one and one because of the 10 H and we have that the shape is"}, {"start": 1692.5600000000002, "end": 1697.96, "text": " 32 by 100 and that is basically this hidden layer of activations here for"}, {"start": 1697.96, "end": 1702.48, "text": " every one of our 32 examples. Now there's one more thing I've lost over that we"}, {"start": 1702.48, "end": 1706.52, "text": " have to be very careful with and that this and that's this plus here. In"}, {"start": 1706.52, "end": 1711.04, "text": " particular we want to make sure that the broadcasting will do what we like. The"}, {"start": 1711.04, "end": 1717.0, "text": " shape of this is 32 by 100 and the one's shape is 100. So we see that the"}, {"start": 1717.0, "end": 1721.3600000000001, "text": " addition here will broadcast these two and in particular we have 32 by 100"}, {"start": 1721.3600000000001, "end": 1727.96, "text": " broadcasting to 100. So broadcasting will align on the right create a fake"}, {"start": 1727.96, "end": 1732.52, "text": " dimension here. So this will become a one by 100 row vector and then it will"}, {"start": 1732.52, "end": 1737.96, "text": " copy vertically for every one of these rows of 32 and do an element wise"}, {"start": 1737.96, "end": 1741.76, "text": " addition. So in this case the correct thing will be happening because the"}, {"start": 1741.76, "end": 1748.56, "text": " same bias vector will be added to all the rows of this matrix. So that is"}, {"start": 1748.56, "end": 1752.32, "text": " correct. That's what we'd like and it's always good practice just make sure"}, {"start": 1752.32, "end": 1756.0, "text": " so that you don't treat yourself in the foot. And finally let's create the"}, {"start": 1756.0, "end": 1765.4, "text": " final layer here. So let's create W2 and V2. The input now is 100 and the"}, {"start": 1765.4, "end": 1769.76, "text": " output number of neurons will be for us 27 because we have 27 possible"}, {"start": 1769.76, "end": 1775.68, "text": " characters that come next. So the biases will be 27 as well. So therefore the"}, {"start": 1775.68, "end": 1782.08, "text": " low jits which are the outputs of this neural net are going to be H"}, {"start": 1782.08, "end": 1790.96, "text": " multiplied by W2 plus B2. Loads is that shape is 32 by 27 and the"}, {"start": 1790.96, "end": 1795.52, "text": " low jits look good. 
Now exactly as we saw in the previous video we want to"}, {"start": 1795.52, "end": 1799.12, "text": " take these low jits and we want to first experiment shape them to get our fake"}, {"start": 1799.12, "end": 1804.48, "text": " counts and then we want to normalize them into a probability. So prob is counts"}, {"start": 1804.48, "end": 1811.36, "text": " divide and now counts that sum along the first dimension and keep them"}, {"start": 1811.36, "end": 1818.08, "text": " as true exactly as in the previous video. And so prob that shape now is the"}, {"start": 1818.08, "end": 1825.6399999999999, "text": " R2 by 27 and you'll see that every row of prob sums to one so it's normalized."}, {"start": 1825.6399999999999, "end": 1830.4399999999998, "text": " So that gives us the probabilities. Now of course we have the actual letter that"}, {"start": 1830.4399999999998, "end": 1836.4799999999998, "text": " comes next and that comes from this array why which we created during the"}, {"start": 1836.4799999999998, "end": 1840.3999999999999, "text": " data separation. So why is this last piece here which is the"}, {"start": 1840.4, "end": 1844.16, "text": " unethically of the next character in a sequence that we'd like to now predict."}, {"start": 1844.16, "end": 1848.16, "text": " So what we'd like to do now is just as in the previous video we'd like to"}, {"start": 1848.16, "end": 1853.1200000000001, "text": " index into the rows of prob and each row we'd like to pluck out the probability"}, {"start": 1853.1200000000001, "end": 1858.76, "text": " assigned to the correct character as given here. So first we have torshtot"}, {"start": 1858.76, "end": 1865.92, "text": " range of 32 which is kind of like an iterator over numbers from 0 to 31 and"}, {"start": 1865.92, "end": 1870.8400000000001, "text": " then we can index into prob in the following way. Prob in torshtot"}, {"start": 1870.8400000000001, "end": 1876.0, "text": " range of 32 which it erased the roads and then each row we'd like to grab this"}, {"start": 1876.0, "end": 1882.0800000000002, "text": " column as given by why. So this gives the current probabilities as assigned by"}, {"start": 1882.0800000000002, "end": 1885.92, "text": " this neural network with this setting of its weights to the correct"}, {"start": 1885.92, "end": 1890.28, "text": " character in the sequence. And you can see here that this looks okay for some"}, {"start": 1890.28, "end": 1894.0800000000002, "text": " of these characters like this is basically point two but it doesn't look very"}, {"start": 1894.08, "end": 1900.4399999999998, "text": " good at all for many other characters. Like this is 0.0701 probability and so the"}, {"start": 1900.4399999999998, "end": 1903.6799999999998, "text": " network thinks that some of these are extremely unlikely but of course we"}, {"start": 1903.6799999999998, "end": 1909.1999999999998, "text": " haven't trained the neural network yet. So this will improve and ideally all of"}, {"start": 1909.1999999999998, "end": 1912.52, "text": " these numbers here of course are one because then we are correctly predicting"}, {"start": 1912.52, "end": 1916.24, "text": " the next character. Now just as in the previous video we want to take these"}, {"start": 1916.24, "end": 1920.28, "text": " probabilities. 
We want to look at the log probability, then the average log probability, and the negative of that to create the negative log likelihood loss. So the loss here is 17, and this is the loss that we'd like to minimize to get the network to predict the correct character in the sequence.

Okay, so I rewrote everything here and made it a bit more respectable. So here's our dataset, and here are all the parameters that we defined. I'm now using a generator to make it reproducible. I clustered all the parameters into a single list of parameters so that, for example, it's easy to count them and see that in total we currently have about 3,400 parameters. And this is the forward pass as we developed it, and we arrive at a single number here, the loss, that is currently expressing how well this neural network works with the current setting of parameters.

Now I would like to make it even more respectable. In particular, see these lines here where we take the logits and calculate a loss: we're not actually reinventing the wheel here. This is just classification, and many people use classification, and that's why there is an F.cross_entropy function in PyTorch that calculates this much more efficiently. So we could simply call F.cross_entropy, pass in the logits and the array of targets Y, and this calculates the exact same loss. So in fact we can put this here, erase these three lines, and we're going to get the exact same result.

Now there are actually many good reasons to prefer F.cross_entropy over rolling your own implementation like this. I did this for educational reasons, but you'd never use this in practice. Why is that? Number one, when you use F.cross_entropy, PyTorch will not actually create all these intermediate tensors, because these are all new tensors in memory, and all of this is fairly inefficient to run like this.
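As a sketch, these are the three manual lines and the F.cross_entropy call that replaces them (same logits and Y as above):

```python
import torch.nn.functional as F

# manual version (educational)
counts = logits.exp()
prob = counts / counts.sum(1, keepdim=True)
loss_manual = -prob[torch.arange(32), Y].log().mean()

# equivalent, and preferred in practice
loss = F.cross_entropy(logits, Y)
print(loss_manual.item(), loss.item())   # the two values match (up to floating point)
```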
Instead, PyTorch will cluster up all these operations and very often create fused kernels that very efficiently evaluate these expressions, these clustered mathematical operations. Number two, the backward pass can be made much more efficient, and not just because it's a fused kernel, but also because analytically and mathematically it's often a much simpler backward pass to implement. We actually saw this with micrograd: when we implemented tanh, the forward pass of the operation was a fairly complicated mathematical expression, but because it's a clustered mathematical expression, when we did the backward pass we didn't individually backward through the exp and the two times and the minus one and the division, etc. We just said it's 1 minus t squared, and that's a much simpler mathematical expression. We were able to do this because we can reuse calculations and because we can mathematically and analytically derive the derivative, and often that expression simplifies, so there's much less to implement. So not only can it be made more efficient because it runs in a fused kernel, but also because the expressions can take a much simpler form mathematically. So that's number one.

Number two, under the hood F.cross_entropy can also be significantly more numerically well behaved. Let me show you an example of how this works. Suppose we have logits of negative 2, 3, negative 3, 0 and 5, and we take the exponent of them and normalize them to sum to one. When the logits take on these values, everything is well and good and we get a nice probability distribution. Now consider what happens when some of these logits take on more extreme values, which can happen during the optimization of a neural network.
Suppose that some of these numbers grow very negative, like say negative 100. Then actually everything comes out fine: we still get probabilities that are well behaved, they sum to one, and everything is great. But because of the way exp works, if you have very positive logits, like say positive 100 in here, you actually start to run into trouble, and we get a not-a-number here. The reason for that is that these counts have an inf in them. If you pass a very negative number to exp, you just get a very small number, very near zero, and that's fine. But if you pass in a very positive number, suddenly we run out of range in the floating point number that represents these counts. Basically we're taking e and raising it to the power of 100, and that gives us inf, because we run out of dynamic range on the floating point number that is counts. And so we cannot pass very large logits through this expression.

Now let me reset these numbers to something reasonable. The way PyTorch solves this is as follows. You see how we have a really well behaved result here? It turns out that, because of the normalization, you can offset the logits by any arbitrary constant value that you want. So if I add one here, you get the exact same result, or if I add two, or if I subtract three; any offset produces the exact same probabilities. So, because negative numbers are okay but positive numbers can overflow the exp, what PyTorch does is it internally calculates the maximum value that occurs in the logits and subtracts it. In this case it would subtract five, and so the greatest number in the logits becomes zero and all the other numbers become negative, and then the result is always well behaved. So even with 100 here, which previously was not good, this works, because PyTorch will subtract 100.

And so there are many good reasons to call cross_entropy. Number one, the forward pass can be much more efficient.
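A small sketch of that invariance and of the max-subtraction trick; the helper functions here are just for illustration, not PyTorch's internal code:

```python
logits_demo = torch.tensor([-2.0, 3.0, -3.0, 0.0, 5.0])

def naive_softmax(x):
    counts = x.exp()
    return counts / counts.sum()

print(naive_softmax(logits_demo))         # fine
print(naive_softmax(logits_demo + 100))   # overflows: exp(105) -> inf -> nan

def stable_softmax(x):
    x = x - x.max()                       # offsetting the logits doesn't change the result
    counts = x.exp()
    return counts / counts.sum()

print(stable_softmax(logits_demo + 100))  # same probabilities as naive_softmax(logits_demo)
```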
The backward pass can be much more efficient, and also things can be much more numerically well behaved. Okay, so let's now set up the training of this neural net. We have the forward pass. We don't need these lines, because we have that loss is equal to F.cross_entropy; that's the forward pass. Then we need the backward pass. First we want to set the gradients to zero, so for p in parameters we set p.grad to None, which is the same as setting it to zero in PyTorch, and then loss.backward() to populate those gradients. Once we have the gradients, we can do the parameter update: for p in parameters, we take p.data and nudge it by negative learning rate times p.grad. And then we want to repeat this a few times, and let's print the loss here as well.

Now this won't suffice, and it will create an error, because we also have to go for p in parameters and make sure that p.requires_grad is set to True in PyTorch. And then this should just work. Okay, so we started off with a loss of 17 and we're decreasing it. Let's run longer. You see how the loss decreases a lot here: if we just run for a thousand iterations, we get a very, very low loss, and that means we're making very good predictions.

Now the reason this is so straightforward right now is that we're only overfitting 32 examples. We only have 32 examples, from the first five words, and therefore it's very easy to make this neural net fit only those 32 examples, because we have 3,400 parameters and only 32 examples. So we're doing what's called overfitting a single batch of the data, and getting a very low loss and good predictions, but that's just because we have so many parameters for so few examples; it's easy to make the loss very low.

Now, we're not able to achieve exactly zero, and the reason for that is, we can for example look at the logits that are being predicted. We can look at the max along the first dimension; in PyTorch, max reports both the actual values that take on the maximum, but also the indices of these.
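A compact sketch of that training loop, assuming the parameter tensors from the earlier sketch:

```python
parameters = [C, W1, b1, W2, b2]
for p in parameters:
    p.requires_grad = True

for _ in range(1000):
    # forward pass
    emb = C[X]                                   # (32, 3, 2)
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)    # (32, 100)
    logits = h @ W2 + b2                         # (32, 27)
    loss = F.cross_entropy(logits, Y)

    # backward pass
    for p in parameters:
        p.grad = None                            # same as zeroing the gradients
    loss.backward()

    # update
    for p in parameters:
        p.data += -0.1 * p.grad                  # learning rate of 0.1

print(loss.item())
```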
And you'll see that the indices are very close to the labels, but in some cases they differ. For example, in this very first example the predicted index is 19 but the label is 5, and we're not able to make the loss zero. Fundamentally that's because here, the very first, or zeroth, index is the example where dot dot dot is supposed to predict E; but you see how dot dot dot is also supposed to predict an O, and dot dot dot is also supposed to predict an I, and then an S as well. So basically E, O, A, or S are all possible outcomes in the training set for the exact same input. So we're not able to completely overfit and make the loss exactly zero, but we're getting very close in the cases where there's a unique input for a unique output; in those cases we do what's called overfitting, and we basically get the exact correct result.

So now all we have to do is make sure that we read in the full dataset and optimize the neural net. Okay, so let's swing back up to where we created the dataset, and we see that here we only used the first five words. So let me now erase this, and let me erase the print statements, otherwise we'd be printing way too much. And so when we process the full dataset of all the words, we now have 228,000 examples instead of just 32. So let's now scroll back down; this is now much larger. We initialize the weights, the same number of parameters, they all require gradients, and then let's put this print of loss.item() here and see how the optimization goes if we run this.

Okay, so we started with a fairly high loss, and then as we're optimizing, the loss is coming down. But you'll notice that it takes quite a bit of time for every single iteration. So let's actually address that, because we're doing way too much work forwarding and backwarding 220,000 examples. In practice, what people usually do is perform the forward pass, backward pass, and update on mini-batches of the data.
So what we want to do is randomly select some portion of the dataset — that's a mini-batch — and then only forward, backward, and update on that little mini-batch, and then we iterate on those mini-batches. So in PyTorch we can, for example, use torch.randint. We can generate numbers between 0 and 5 and make 32 of them; I believe the size has to be a tuple in PyTorch, so we have a tuple (32,) of numbers between 0 and 5, but actually we want X.shape[0] here. And so this creates integers that index into our dataset, and there are 32 of them.

So if our mini-batch size is 32, then we can come here and first do the mini-batch construction. The integers that we want to optimize over in this single iteration are in ix, and then we index into X with ix to only grab those rows. So we're only getting 32 rows of X, and therefore the embeddings will again be 32 by 3 by 2, not 200,000 by 3 by 2. And then this ix has to be used not just to index into X but also to index into Y. And now this should be mini-batches, and this should be much, much faster — okay, so it's almost instant. This way we can run many, many examples nearly instantly and decrease the loss much, much faster.

Now, because we're only dealing with mini-batches, the quality of our gradient is lower, so the direction is not as reliable; it's not the actual gradient direction. But the gradient direction is good enough, even when estimated on only 32 examples, that it is useful. And so it's much better to have an approximate gradient and take more steps than it is to evaluate the exact gradient and take fewer steps. That's why in practice this works quite well.

So let's now continue the optimization. Let me take this loss.item() out from here and place it over here at the end. Okay, so we're hovering around 2.5 or so. However, this is only the loss for that mini-batch, so let's actually evaluate the loss here for all of X and all of Y, just so we have a full sense of exactly how well the model is doing right now. So right now we're at about 2.7 on the entire training set. So let's run the optimization for a while.
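A sketch of the mini-batch construction inside the loop; X and Y here stand for the full dataset tensors:

```python
for _ in range(10000):
    # mini-batch construction: 32 random row indices into the dataset
    ix = torch.randint(0, X.shape[0], (32,))

    # forward pass on just those 32 examples
    emb = C[X[ix]]                               # (32, 3, 2)
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y[ix])        # index Y with the same ix

    # backward pass and update
    for p in parameters:
        p.grad = None
    loss.backward()
    for p in parameters:
        p.data += -0.1 * p.grad

print(loss.item())                               # loss on the last mini-batch only
```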
Okay, we're at 2.6, 2.57, 2.53. So one issue, of course, is that we don't know if we're stepping too slowly or too fast. This 0.1 I just guessed. So one question is: how do you determine this learning rate, and how do we gain confidence that we're stepping at the right sort of speed? I'll show you one way to determine a reasonable learning rate. It works as follows. Let's reset our parameters to the initial settings, and now let's print at every step, but let's only do 10 steps or so, or maybe 100 steps. We want to find a reasonable search range, if you will. So for example, this is very low: we see that the loss is barely decreasing, so that's too low, basically. So let's try this one. Okay, we're decreasing the loss, but not very quickly, so that's a pretty good low end of the range.

Now let's reset it again, and now let's try to find the place at which the loss kind of explodes. So maybe at negative one. Okay, we see that we're minimizing the loss, but you see how it's kind of unstable; it goes up and down quite a bit. So negative one is probably a fast learning rate. Let's try negative 10. Okay, this isn't optimizing; this is not working very well. So negative 10 is way too big, and negative one was already kind of big; therefore negative one was somewhat reasonable, if I reset. So I'm thinking that the right learning rate is somewhere between negative 0.001 and negative 1.

So the way we can do this is with torch.linspace, and we basically want something like this between 0.001 and 1, but the number of steps is one more parameter that's required; let's do a thousand steps. This creates 1,000 numbers between 0.001 and 1. But it doesn't really make sense to step between these linearly, so instead let me create a learning rate exponent: instead of 0.001 this will be negative three and this will be zero, and then the actual learning rates that we want to search over are going to be 10 to the power of lre. So now what we're doing is stepping linearly between the exponents of these learning rates.
This is 0.001 and this is 1, because 10 to the power of 0 is 1, and therefore we are spaced exponentially in this interval. So these are the candidate learning rates that we want to search over, roughly. Now what we're going to do is run the optimization for 1,000 steps, and instead of using a fixed learning rate we are going to index into the learning rates, lrs at i, and make this i. So basically, let me reset this to start again from random, creating these learning rates between 0.001 and 1, exponentially stepped. And here what we're doing is iterating a thousand times, using a learning rate that's very, very low in the beginning — 0.001 — but by the end it's going to be 1, and then we step with that learning rate.

Now what we want to do is keep track of the learning rates that we used, and look at the losses that resulted. So here, let me track stats: lri.append(lr) and lossi.append(loss.item()). Okay, so again reset everything and then run. So basically we started with a very low learning rate and we went all the way up to a learning rate of 1. And now what we can do is plot the two: the learning rates on the x-axis and the losses we saw on the y-axis. And often you're going to find that your plot looks something like this: in the beginning, where you have very low learning rates, basically nothing happened; then we got to a nice spot here; and then, as we increased the learning rate enough, things basically started to become unstable. So a good learning rate turns out to be somewhere around here.

And because we have lri here, we actually may want to plot not the learning rate but its exponent; so lre at i is maybe what we want to log. Let me reset this and redo that calculation, but now on the x-axis we have the exponent of the learning rate, and so we can see which exponent of the learning rate is good to use.
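Roughly, the learning rate sweep looks like this (matplotlib is assumed for the plot; the loop body is the same mini-batch step as before):

```python
import matplotlib.pyplot as plt

lre = torch.linspace(-3, 0, 1000)   # exponents from -3 to 0
lrs = 10 ** lre                     # candidate learning rates from 0.001 to 1

lri, lossi = [], []
for i in range(1000):
    ix = torch.randint(0, X.shape[0], (32,))
    emb = C[X[ix]]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y[ix])

    for p in parameters:
        p.grad = None
    loss.backward()
    lr = lrs[i]
    for p in parameters:
        p.data += -lr * p.grad

    lri.append(lre[i].item())       # track the exponent, as discussed above
    lossi.append(loss.item())

plt.plot(lri, lossi)                # look for the "valley" before the losses explode
plt.show()
```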
It would be roughly in the valley here, because over here the learning rates are just way too low, then here we expect a relatively good learning rate somewhere, and then here things start to explode. So somewhere around negative 1 as the exponent of the learning rate is a pretty good setting, and 10 to the negative 1 is 0.1. So 0.1 is actually a fairly good learning rate, and that's what we had in the initial setting. But that's roughly how you would determine it.

So here now we can take out the tracking of these, and just set lr to 10 to the negative 1, or in other words 0.1, as it was before, and now we have some confidence that this is actually a fairly good learning rate. So now what we can do is crank up the iterations, reset our optimization, and run for a pretty long time using this learning rate. Oops — and we don't want to print; that's way too much printing. So let me again reset and run 10,000 steps. Okay, so we're at roughly 2.48. Let's run another 10,000 steps: 2.46. And now let's do one learning rate decay. What this means is that we take our learning rate and lower it by 10x, so that at the late stages of training, potentially, we go a bit slower. Actually, let's do one more run at 0.1 first, just to see if we're still making a dent here. Okay, we're still making a dent.

And by the way, the bigram loss that we achieved in the last video was 2.45, so we've already surpassed the bigram model. Once I get a sense that this is starting to plateau off, people like to do, as I mentioned, this learning rate decay. So let's decay the learning rate, and we achieve about 2.3 now. Obviously this is janky and not exactly how you'd train things in production, but this is roughly what you go through: you first find a decent learning rate using the approach I showed you, then you start with that learning rate and you train for a while.
And then, at the end, people like to do a learning rate decay, where you decay the learning rate by, say, a factor of 10 and do a few more steps, and then you have a trained network, roughly speaking. So we've achieved 2.3 and dramatically improved on the bigram language model using this simple neural net, as described here, with its 3,400 parameters.

Now there's something we have to be careful with. I said that we have a better model because we're achieving a lower loss, 2.3, much lower than the 2.45 of the bigram model. That's not exactly true, and the reason is that this is actually a fairly small model, but these models can get larger and larger as you keep adding neurons and parameters. So you can imagine that we don't have to stop at a few thousand parameters; we could have 10,000, or 100,000, or millions of parameters. And as the capacity of the neural network grows, it becomes more and more capable of overfitting your training set. What that means is that the loss on the training set, on the data that you're training on, will become very, very low — as low as zero — but all the model is doing is memorizing your training set verbatim. So if you take that model, it looks like it's working really well, but if you try to sample from it, you will basically only get examples exactly as they are in the training set; you won't get any new data. In addition to that, if you try to evaluate the loss on some withheld names or other words, you will actually see that the loss on those can be very high. So basically it's not a good model.

So the standard in the field is to split up your dataset into three splits, as we call them: the training split, the dev split or validation split, and the test split. Typically the training split would be, say, 80% of your dataset, the dev split roughly 10%, and the test split roughly 10%. So you have these three splits of the data.
Now, the 80% of the dataset, the training split, is used to optimize the parameters of the model, just like we're doing here using gradient descent. The 10% of examples in the dev or validation split are used for development over all the hyperparameters of your model. Hyperparameters are, for example, the size of the hidden layer or the size of the embedding — so that's 100 or 2 for us right now, but we could try different things — or the strength of the regularization, which we aren't using yet so far. So there are lots of different hyperparameters and settings that go into defining a neural net, and you can try many different variations of them and see whichever one works best on your validation split.

So the training split is used to train the parameters, the dev split is used to train the hyperparameters, and the test split is used to evaluate the performance of the model at the end. So we only evaluate the loss on the test split very, very sparingly and very few times, because every single time you evaluate your test loss and learn something from it, you basically start to also train on the test split. So you are only allowed to test the loss on the test set very, very few times; otherwise you risk overfitting to it as well as you experiment on your model.

So let's also split up our data into train, dev, and test, and then we'll train on train and only evaluate on test very, very sparingly. Okay, so here we go. Here is where we took all the words and put them into the X and Y tensors. So instead, let me create a new cell here and just copy-paste some code, because I don't think it's that complex, but we're going to try to save a little bit of time. I'm converting this into a function now, and this function takes some list of words and builds the arrays X and Y for those words only. And then here I am shuffling up all the words; these are the input words that we get, and we randomly shuffle them all up. And then we set n1 to be the number of words that is 80% of the words, and n2 to be 90% of the words.
So basically, if the length of words is about 30,000 — actually, I should probably just run this — n1 is 25,000 and n2 is 28,000. And so here you see that I'm calling build_dataset to build the training set X and Y by indexing into the words up to n1, so we're going to have only 25,000 training words; then we're going to have roughly n2 minus n1, about 3,000, validation or dev examples; and we're going to have len(words) minus n2, or 3,204 examples, for the test set. So now we have Xs and Ys for all three splits. Oh yeah, I'm also printing their sizes here inside the function. But note that these are no longer words; these are already the individual examples made from those words.

So let's now scroll down here. The dataset for training is now more like this, and when we reset the network and train, we're only going to be training using X train and Y train; that's the only thing we're training on. Let's see where we are on a single batch, and let's now train maybe a few more steps. Training neural nets can take a while. Usually you don't do it inline; you launch a bunch of jobs and you wait for them to finish, which can take multiple days and so on. Luckily, this is a very small network. Okay, so the loss is pretty good. Oh — we accidentally used a learning rate that is way too low; we had left in the decayed learning rate of 0.01. So let me come back and fix that, and this will train faster. And then here, when we evaluate, let's use the dev set, X dev and Y dev, to evaluate the loss. Okay, and let's not decay the learning rate, and only do, say, 10,000 steps.

And let's evaluate the dev loss once here. Okay, so we're getting about 2.3 on dev, and the neural network, while training, did not see these dev examples; it hasn't optimized on them. And yet, when we evaluate the loss on dev, we actually get a pretty decent loss. And so we can also look at what the loss is on the whole training set. Oops. And so we see that the training and the dev loss are about equal.
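A sketch of that split construction; build_dataset is the function described above, while stoi (the character-to-index mapping), block_size (the 3-character context), and the Xtr/Ytr-style names are assumptions for illustration:

```python
import random

block_size = 3  # context length: how many characters we take to predict the next one

def build_dataset(words):
    X, Y = [], []
    for w in words:
        context = [0] * block_size
        for ch in w + '.':
            ix = stoi[ch]
            X.append(context)
            Y.append(ix)
            context = context[1:] + [ix]   # crop the context and append the new character
    X, Y = torch.tensor(X), torch.tensor(Y)
    print(X.shape, Y.shape)
    return X, Y

random.seed(42)
random.shuffle(words)
n1 = int(0.8 * len(words))   # 80% of the words for training
n2 = int(0.9 * len(words))   # next 10% for dev, last 10% for test

Xtr,  Ytr  = build_dataset(words[:n1])
Xdev, Ydev = build_dataset(words[n1:n2])
Xte,  Yte  = build_dataset(words[n2:])
```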
So we're not overfitting. This model is not powerful enough to just be purely memorizing the data, and so far we are doing what's called underfitting, because the training loss and the dev or test losses are roughly equal. What that typically means is that our network is very tiny, very small, and we expect to make performance improvements by scaling up the size of this neural net. So let's do that now. Let's come over here and increase the size of the neural net. The easiest way to do this is to come to the hidden layer, which currently has 100 neurons, and bump it up; so let's do 300 neurons. Then this is also 300 biases, and here we have 300 inputs into the final layer. So let's initialize our neural net. We now have about 10,000 parameters instead of 3,000 parameters.

Then we're not using this anymore, and what I'd like to do here is keep track of stats. Okay, let's keep stats again, and here, when we're keeping track of the loss, let's also keep track of the steps, and just append i here. And let's train on 30,000 — or rather, let's try 30,000 iterations — at a learning rate of 0.1, and we should be able to run this and optimize the neural net. Then here, basically, I want to plt.plot the steps against the losses; these are the x's and the y's, and this is the loss function and how it's being optimized.

Now you see that there's quite a bit of thickness to this, and that's because we are optimizing over these mini-batches, and the mini-batches create a little bit of noise. Where are we on the dev set? We are at 2.5, so we still haven't optimized this neural net very well, and that's probably because we made it bigger; it might take longer for this neural net to converge. So let's continue training. Yeah, let's just continue training. One possibility is that the batch size is so low that we just have way too much noise in the training, and we may want to increase the batch size so that we have a somewhat more correct gradient and we're not thrashing around too much, and we can actually optimize more properly. Okay — this plot will now become meaningless, because we've re-initialized these.
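A sketch of the scaled-up initialization and the stat tracking just described, with the same assumed names as before:

```python
g = torch.Generator().manual_seed(2147483647)
C  = torch.randn((27, 2),   generator=g)
W1 = torch.randn((6, 300),  generator=g)   # hidden layer bumped from 100 to 300 neurons
b1 = torch.randn(300,       generator=g)
W2 = torch.randn((300, 27), generator=g)
b2 = torch.randn(27,        generator=g)
parameters = [C, W1, b1, W2, b2]
print(sum(p.nelement() for p in parameters))   # roughly 10,000 parameters now
for p in parameters:
    p.requires_grad = True

stepi, lossi = [], []
for i in range(30000):
    ix = torch.randint(0, Xtr.shape[0], (32,))
    emb = C[Xtr[ix]]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Ytr[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    for p in parameters:
        p.data += -0.1 * p.grad
    stepi.append(i)
    lossi.append(loss.item())

plt.plot(stepi, lossi)   # noisy ("thick") curve because of the small mini-batches
plt.show()
```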
So yeah, this doesn't look very pleasing right now. The problem is that there's maybe a tiny improvement, but it's so hard to tell. Let's go again: 2.52. Let's try to decrease the learning rate by a factor of 2. Okay, we're at 2.32. Let's continue training. We basically expect to see a lower loss than what we had before, because now we have a much, much bigger model and we were underfitting, so we'd expect that increasing the size of the model should help the neural net. 2.32 — okay, so that's not happening too well. Now, one other concern is that even though we've made the hidden layer, the tanh layer, much bigger, it could be that the bottleneck of the network right now is these embeddings, which are two-dimensional. It could be that we're cramming way too many characters into just two dimensions, and the neural net is not able to use that space effectively, and that is the bottleneck to our network's performance.

Okay, 2.23. So just by decreasing the learning rate I was able to make quite a bit of progress. Let's run this one more time, and then evaluate the training and the dev loss. Now, one more thing I'd like to do after training is to visualize the embedding vectors for these characters before we scale up the embedding size from 2, because we'd like to make this bottleneck potentially go away, but once I make it greater than two we won't be able to visualize the embeddings directly. So here — okay, we're at 2.23 and 2.24, so we're not improving much more, and maybe the bottleneck now is the character embedding size, which is two.

So here I have a bunch of code that will create a figure, and then we're going to visualize the embeddings that were trained by the neural net on these characters. Because the embedding size is just two right now, we can visualize all the characters with the x and the y coordinates as the two embedding locations for each character. So here are the x coordinates and the y coordinates, which are the columns of C, and then for each one I also include the text of the little character.
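A sketch of that plotting code, assuming C is the 27-by-2 embedding table and itos maps integer indices back to characters:

```python
plt.figure(figsize=(8, 8))
# each character's embedding gives its (x, y) position in the plot
plt.scatter(C[:, 0].data, C[:, 1].data, s=200)
for i in range(C.shape[0]):
    plt.text(C[i, 0].item(), C[i, 1].item(), itos[i],
             ha="center", va="center", color="white")
plt.grid(True)
plt.show()
```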
So here, what we see is actually kind of interesting. The network has basically learned to separate out the characters and cluster them a little bit. For example, you see how the vowels A, E, I, O, U are clustered up here; what that's telling us is that the neural net treats these as very similar, right? Because when they feed into the neural net, the embeddings for all these characters are very similar, and so the neural net thinks they're very similar and kind of interchangeable, which makes sense. Then the points that are really far away are, for example, Q: Q is kind of treated as an exception, and it has a very special embedding vector, so to speak. Similarly, dot, which is a special character, is all the way out here, and a lot of the other letters are clustered up here. So it's kind of interesting that there's a little bit of structure here after the training; it's definitely not random, and these embeddings make sense.

So we're now going to scale up the embedding size, and we won't be able to visualize it directly anymore. And we expect that, because we're underfitting and we made this layer much bigger without sufficiently improving the loss, the constraint to better performance right now could be these embedding vectors. So let's make them bigger. Okay, so let's scroll up here, and now we won't have two-dimensional embeddings; we are going to have, say, 10-dimensional embeddings for each character. Then this layer will receive 3 times 10, so 30 inputs will go into the hidden layer. Let's also make the hidden layer a bit smaller: instead of 300, let's just do 200 neurons in that hidden layer, so the total number of parameters will be slightly bigger, at around 11,000. And then here we have to be a bit careful, because, okay, the learning rate we set to 0.1, and here we were hard-coding 6 — obviously, if you're working in production, you don't want to be hard-coding magic numbers — but instead of 6, this should now be 30. And let's run for 50,000 iterations, and let me split out the initialization here outside, so that when we run this cell multiple times it doesn't wipe out the stats we're tracking.
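A sketch of that re-initialization with 10-dimensional embeddings and a 200-neuron hidden layer:

```python
g = torch.Generator().manual_seed(2147483647)
C  = torch.randn((27, 10),  generator=g)   # 10-dimensional embeddings now
W1 = torch.randn((30, 200), generator=g)   # 3 context chars * 10 dims = 30 inputs
b1 = torch.randn(200,       generator=g)
W2 = torch.randn((200, 27), generator=g)
b2 = torch.randn(27,        generator=g)
parameters = [C, W1, b1, W2, b2]
print(sum(p.nelement() for p in parameters))   # total parameter count, slightly bigger than before
for p in parameters:
    p.requires_grad = True

# inside the training loop, the reshape changes from .view(-1, 6) to .view(-1, 30):
#   h = torch.tanh(emb.view(-1, 30) @ W1 + b1)
```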
In addition to that, instead of logging loss.item() here, let's actually log the log of the loss — torch.log10, I believe, is the function — and I'll show you why in a second. Let's optimize this. Basically, I'd like to plot the log of the loss instead of the loss itself, because when you plot the loss, it often has this hockey-stick appearance, and the log squashes it in, so it just looks nicer. So the x-axis is stepi and the y-axis will be lossi. And then here this is 30; ideally we wouldn't be hard-coding these.

Let's look at the loss. Okay, it's again very thick, because the mini-batch size is very small, but the total loss over the training set is 2.3, and on the test — or rather the dev — set it's 2.3 as well. So far so good. Let's now try to decrease the learning rate by a factor of 10 and train for another 50,000 iterations. We'd hope to be able to beat 2.3. But again, we're just doing this very haphazardly, so I don't actually have confidence that our learning rate is set very well, or that our learning rate decay, which we just do at random, is set very well. And so the optimization here is kind of suspect, to be honest, and this is not how you would typically do it in production. In production, you would turn all of these settings into parameters, or hyperparameters, and then you would run lots of experiments and see whichever ones work best for you.

Okay, so we have 2.17 now and 2.2. You see how the training and the validation performance are starting to slowly depart: maybe we're getting the sense that the neural net is getting good enough, or the number of parameters is large enough, that we are slowly starting to overfit. Let's maybe run one more iteration of this and see where we get. But yeah, basically you would be running lots of experiments and then slowly scrutinizing whichever ones give you the best dev performance. And then, once you find all the hyperparameters that make your dev performance good, you take that model and evaluate the test set performance a single time, and that's the number you report in your paper, or wherever else you want to talk about and brag about your model.
So let's then rerun the plot, and rerun the train and dev losses. And because we're getting a lower loss now, it was very likely the case that the embedding size of 2 was holding us back. Okay, so 2.16 and 2.19 is roughly what we're getting. So there are many ways to go from here. We can continue tuning the optimization; we can continue, for example, playing with the size of the neural net; or we can increase the number of words — or characters, in our case — that we are taking as an input. So instead of just three characters, we could be taking more characters as input, and that could further improve the loss.

Okay, so I changed the code slightly. We now have 200,000 steps of the optimization, and in the first 100,000 we're using a learning rate of 0.1, and then in the next 100,000 we're using a learning rate of 0.01. This is the loss I achieved, and these are the performances on the training and validation losses. In particular, the best validation loss I've been able to obtain in the last 30 minutes or so is 2.17.

So now I invite you to beat this number, and you have quite a few knobs available to you to, I think, surpass it. Number one, you can of course change the number of neurons in the hidden layer of this model. You can change the dimensionality of the embedding lookup table. You can change the number of characters that feed in as the input, as the context, into this model. And then, of course, you can change the details of the optimization: how long we run, what the learning rate is, how it changes over time, how it decays. You can change the batch size, and you may be able to achieve a much better convergence speed in terms of how many seconds or minutes it takes to train the model and get a really good loss. And then, of course, I actually invite you to read this paper. It is 19 pages, but at this point you should be able to read and understand pretty good chunks of it, and this paper also has quite a few ideas for improvements that you can play with. So all of those are now available to you, and you should be able to beat this number.
I'm leaving"}, {"start": 4395.68, "end": 4404.68, "text": " that as an exercise to the reader and that's it for now and I'll see you next time."}, {"start": 4404.68, "end": 4409.16, "text": " Before we wrap up, I also wanted to show how you would sample from the model. So we're"}, {"start": 4409.16, "end": 4415.68, "text": " going to generate 20 samples. At first we begin with all dots. So that's the context."}, {"start": 4415.68, "end": 4423.84, "text": " And then until we generate the zeroed character again, we're going to embed the current context"}, {"start": 4423.84, "end": 4429.8, "text": " using the embedding table C. Now usually here, the first dimension was the size of the"}, {"start": 4429.8, "end": 4433.6, "text": " training set, but here we're only working with a single example that we're generating."}, {"start": 4433.6, "end": 4441.28, "text": " So this is just the mission one, just for simplicity. And so this embedding then gets projected"}, {"start": 4441.28, "end": 4446.2, "text": " into the state. You get the logits. Now we calculate the probabilities. For that, you"}, {"start": 4446.2, "end": 4452.08, "text": " can use f dot softmax of logits. And that just basically exponentially is the logits"}, {"start": 4452.08, "end": 4456.8, "text": " and makes them sum to one. And similar to cross entropy, it is careful that there's"}, {"start": 4456.8, "end": 4462.16, "text": " no overflows. Once we have the probabilities, we sample from them using torshot multinomial"}, {"start": 4462.16, "end": 4467.32, "text": " to get our next index. And then we shift the context window to append the index and record"}, {"start": 4467.32, "end": 4473.96, "text": " it. And then we can just decode all the integers to strings and print them out. And so these"}, {"start": 4473.96, "end": 4478.28, "text": " are some example samples. And you can see that the model now works much better. So the"}, {"start": 4478.28, "end": 4488.639999999999, "text": " words here are much more word like or name like. So we have things like ham, joes, lele,"}, {"start": 4488.639999999999, "end": 4492.88, "text": " it started to sound a little bit more name like. So we're definitely making progress, but"}, {"start": 4492.88, "end": 4497.599999999999, "text": " we can still improve on this model quite a lot. Okay, sorry, there's some bonus content."}, {"start": 4497.599999999999, "end": 4502.24, "text": " I wanted to mention that I want to make these notebooks more accessible. And so I don't"}, {"start": 4502.24, "end": 4506.04, "text": " want you to have to like install your bare notebooks and torture everything else. So I"}, {"start": 4506.04, "end": 4511.8, "text": " will be sharing a link to Google collab. And the Google collab will look like a notebook"}, {"start": 4511.8, "end": 4516.44, "text": " in your browser. And you can just go to URL and you'll be able to execute all of the"}, {"start": 4516.44, "end": 4521.72, "text": " code that you saw in the Google collab. And so this is me executing the code in this"}, {"start": 4521.72, "end": 4525.96, "text": " lecture. And I shortened it a little bit. But basically you're able to train the exact"}, {"start": 4525.96, "end": 4530.32, "text": " same network and then plot and sample from the model. And everything is ready for you"}, {"start": 4530.32, "end": 4535.88, "text": " to like tinker with the numbers right there in your browser. 
No installation necessary."}, {"start": 4535.88, "end": 4538.8, "text": " So I just wanted to point that out and the link to this will be in the video description."}] |
Neural Networks: Zero to Hero | https://www.youtube.com/watch?v=PaCmpygFfXo | The spelled-out intro to language modeling: building makemore | "We implement a bigram character-level language model, which we will further complexify in followup (...TRUNCATED) | " Hi everyone, hope you're well. And next up what I'd like to do is I'd like to build out Makemore. (...TRUNCATED) | "[{\"start\": 0.0, \"end\": 2.0, \"text\": \" Hi everyone, hope you're well.\"}, {\"start\": 2.0, \"(...TRUNCATED) |
Neural Networks: Zero to Hero | https://www.youtube.com/watch?v=P6sfmUTpUmc | Building makemore Part 3: Activations & Gradients, BatchNorm | "We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of th(...TRUNCATED) | " Hi everyone. Today we are continuing our implementation of Makemore. Now in the last lecture we im(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 5.6000000000000005, \"text\": \" Hi everyone. Today we are continuing ou(...TRUNCATED) |
Neural Networks: Zero to Hero | https://www.youtube.com/watch?v=VMj-3S1tku0 | The spelled-out intro to neural networks and backpropagation: building micrograd | "This is the most step-by-step spelled-out explanation of backpropagation and training of neural net(...TRUNCATED) | " Hello, my name is Andre and I've been training deep neural networks for a bit more than a decade a(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 5.86, \"text\": \" Hello, my name is Andre and I've been training deep n(...TRUNCATED) |
Diana Uribe | https://www.youtube.com/watch?v=dayDTsaM1Gc | Drive My Car | "#miercolesdecine #drivemycar #mubi \nImagínense que nuestro patrocinador de #Miércolesdecine que(...TRUNCATED) | " Buenas, les cuento una historia hoy en mi el cole de cine después de 4 años tenemos un patrocina(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 20.72, \"text\": \" Buenas, les cuento una historia hoy en mi el cole de(...TRUNCATED) |
Diana Uribe | https://www.youtube.com/watch?v=MZV9fQB0hnM | Feria de Manizales | "#podcastdianauribe #dianauribefm #feriademanizales \nEsta vez el turno es para una de las festivid(...TRUNCATED) | " buenas siguiendo las tradiciones de las ferias y fiestas en las que estamos montados encontrando e(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 6.5, \"text\": \" buenas siguiendo las tradiciones de las ferias y fiest(...TRUNCATED) |
Diana Uribe | https://www.youtube.com/watch?v=CUDzFxG6cJ4 | Hable con ella | "#miercolesdecine #almodovar \nNuestro patrocinador @MUBI nos regaló 30 días gratis de cine en su (...TRUNCATED) | " Buenas, les conté que muy bien nos está patos de ineados que tenemos una lianza con muy, les con(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 18.88, \"text\": \" Buenas, les cont\\u00e9 que muy bien nos est\\u00e1 (...TRUNCATED) |
Diana Uribe | https://www.youtube.com/watch?v=1ZDKBghlo-g | Festival y Carnaval de la Subienda de Honda | "#podcastdianauribe #honda \nEsta vez en nuestra serie de Ferias y Fiestas de Colombia nuestro dest(...TRUNCATED) | " ¡Buenas! Hoy nos vamos a meter con un carnaval para el cual tenemos que tocar toda la arteria fun(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 15.14, \"text\": \" \\u00a1Buenas! Hoy nos vamos a meter con un carnaval(...TRUNCATED) |
Diana Uribe | https://www.youtube.com/watch?v=kcP8uzJ-OHc | Festival Iberoamericano de Teatro | "#podcastdianauribe #\nBogotá celebra alrededor de la cultura. El invitado en esta ocasión es el F(...TRUNCATED) | " Hoy vamos a ver uno de los eventos con los que, digamos, más personalmente estoy ligada de todas (...TRUNCATED) | "[{\"start\": 0.0, \"end\": 9.72, \"text\": \" Hoy vamos a ver uno de los eventos con los que, digam(...TRUNCATED) |
Diana Uribe | https://www.youtube.com/watch?v=_e2AY70KrKU | Adiós, señor Haffmann | "#miercolesdecine \nAdiós, señor Haffmann\nParís, 1942. François Mercier es un hombre corriente (...TRUNCATED) | " Buenas, hoy en mi árcoles de cine les tenemos una cantidad de dilemas éticos, es una película f(...TRUNCATED) | "[{\"start\": 0.0, \"end\": 18.48, \"text\": \" Buenas, hoy en mi \\u00e1rcoles de cine les tenemos (...TRUNCATED) |
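Below is a minimal, hypothetical sketch of how a dataset with this layout could be loaded and inspected using the Hugging Face `datasets` library. The repository id and split name are placeholders, and the assumption that the SEGMENTS column is stored as a JSON-encoded string of `{"start", "end", "text"}` records is inferred from the preview rows above, not a confirmed detail of this dataset.

```python
# A minimal sketch, assuming the column layout shown in the preview above.
# "user/youtube-transcriptions" is a placeholder repo id -- substitute the real path.
import json

from datasets import load_dataset

# Load the dataset (split name is an assumption; adjust if the dataset uses another split).
ds = load_dataset("user/youtube-transcriptions", split="train")

# Inspect the first row: one YouTube video per row, with its full transcript.
row = ds[0]
print(row["CHANNEL_NAME"], "-", row["TITLE"])
print(row["URL"])

# SEGMENTS is assumed to hold timestamped transcript chunks as a JSON string.
segments = json.loads(row["SEGMENTS"])
for seg in segments[:3]:
    print(f'[{seg["start"]:.1f}s - {seg["end"]:.1f}s] {seg["text"].strip()}')
```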